text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Bringing Trust to Autonomous Mobility
Last decade has been characterized by a huge advancement in the field of automated and connected transport. However, fully autonomous systems still need a lot of effort in order to be applied in transportation. Meanwhile, mixed traffic environments with semi-autonomous vehicles is becoming a norm. In such conditions, vehicles are passing the dynamic driving task back to the human by sending to drivers Requests to Intervene (RtI). At the same time, there is a need to evolve driver’s training in order to be able to safely use semi-automated vehicles, whereas driver intervention performance has to be made an integral part of both driver and technology assessment. Furthermore, the ethical implications of automated decision-making need to be properly assessed, giving rise to novel risk and liability analysis models. In this conceptual paper we present our vision to maximise the safety, trust and acceptance of automated vehicles. To achieve that, we propose an assessment framework to evaluate different technologies involved in Automated Driving Systems (ADS).
I. INTRODUCTION
This decade has brought into our daily lives autonomous vehicles [1], while most of the well-known automakers have already began executing their plans to commercially release autonomous vehicles by 2020-2021 [2]. However, current projections of market analysts, including Blackrock [3] and UBS [4], indicate that broad adoption of fully autonomous vehicles might be decades away. This in turn suggests that the human factor will remain essential for the safety and performance of road transport in the forthcoming decades, mainly for two reasons: a) due to the necessary driver-vehicle interaction in cases where the boundaries of the Operational Design Domain (ODD) of an Automated Driving System (ADS) are being reached, and b) because of the co-existence of fully-, semi-and non-autonomous vehicles, which is likely to be raising unexpected challenges.
Central to the human role in the Connected Automated Driving (CAD) is the transition from automated to manual driving mode. This might be system-initiated, whereby the ADS issue a Request to Intervene (RtI), i.e. notifies the human driver that he should promptly take-over control and perform the Dynamic Driving Task (DDT) fallback [5]. This can happen when the ADS detects a system limit, e.g. because of sensor malfunction, extreme weather conditions, appearance of evolving accident scenes, unexpected road blocks, hazardous traffic code violation from another vehicle, falling of goods, etc. However, the transition can also be userinitiated, e.g. to provide a corridor for emergency vehicle access, or follow hand signals given by a traffic enforcement officer [6].
Evidently, in such a dynamic driver-vehicle interaction scheme, several challenges arise. First, in parallel to the detection of system limits, the driver's availability to intervene has to be evaluated, through continuous Driver State Monitoring (DSM). Second, the transition's success has to be ensured by proactively allowing sufficient lead time and utilising appropriate and comprehensible Human-Machine Interfaces (HMIs) that maximise situation awareness and intervention performance. Third, driver training has to evolve to meet the safety challenges of "driving" an automated vehicle. Fourth, measuring of the driver intervention performance as well as of ADS user acceptance, depending on different levels of automation, take-over requests, etc., becomes essential. Moreover, the implications of automated decision-making from a legal or ethics perspective have to be examined, and risk models (e.g. addressing liability issues) for the co-existence of various automation levels have to be developed. Notably, there is a lack of standards, pilot results and established practices in the aforementioned fields, such as for HMIs in automated driving and for takeover performance assessment. Running parallel to these challenges is the dimension of trust: not interpersonal trust but trust in technology and, specifically, in automation.
In this paper we present the vision of Trustonomy (a neologism from the combination of trust and autonomy) which is to raise the safety, trust and acceptance of automated vehicles by helping to address the aforementioned challenges through a well-integrated and inter-disciplinary approach. The rest of the paper is organised as follows: in Section II we present the Trustonomy objectives and related work; Section III includes our proposed approach; the system architecture is presented in Section IV, while Section V concludes the paper.
II. TRUSTONOMY OBJECTIVES AND RELATED WORK In order to address the challenges mentioned in the previous section, we have identified six specific objectives which are depicted in Fig. 1.
A more detailed description of the envisioned objectives is presented below: • Develop a Methodological Framework for the operational assessment of different DSM systems: DSM plays a crucial role especially for L3 vehicles, in which humans are in the loop, being involved in driving operations. The most critical scenario is represented by the RtI, in which the human driver has to take control of the vehicle. The DSM has to establish the driver status and his ability to safely accomplish it. Relevant research [7] has pointed out that, when humans do not pay attention at all when the vehicle is driving itself, they could not shift attention quickly enough to safely take control of the vehicle. Continuous monitoring of the driver is a possible solution able to mitigate this problem, adapting the RtI procedures, in particular during the takeover phase. The monitoring can be achieved by various methodologies, such us by monitoring the eye movement of the driver [8] or by monitoring driver's blood pressure [9]. The DSM can trigger the appropriate notifications and warnings to the driver, when he decides to resume the manual control. In [10] the researchers agree that in the next few years, there will be manual driven vehicles with several autonomous features requiring a short notice intervention of the driver, therefore, a DSM system is necessary to support time-efficient transition of control.
• Develop a Methodological Framework for the operational assessment of various HMI designs: There is still a lack of understanding regarding methods to evaluate HMI in CAD vehicles. One perspective argues that CAD vehicles could have an HMI design similar to the one used in conventional (L0-L2) vehicles. These systems only help the driver to make adequate decisions, and the driver is still responsible for the decisions he made. Even though L3 systems still require full-time supervision of the driver, the HMI must limit the effects of driver periodical inactivity or driver's fatigue. L4 systems, which allow to drive the vehicle mostly in automatic mode, need to support the driver in resuming the driving task, by addressing problems like lack of attention, low situation awareness and skill reduction. According to the HATRIC project [11], there are three particular reasons for working with HMI for automation in relation to safety: (i) optimizing hand-over of control, (ii) minimizing negative effects of automation induced behaviour, and (iii) increasing usage by means of improved user experience. The fact is that HMI design strongly affects the driver's sense of safety: since perceived safety of the user highly correlates with his trust in technology [12], it is crucial to develop a framework for HMI assessment and to identify major factors affecting driver's trust in autonomous vehicles. From a human factor perspective, the design of automation systems so that drivers fully understand the capabilities and limitations of the vehicle and maintain situational awareness of what the vehicle is doing (and when manual intervention is needed) is currently a fundamental issue [13].
• Develop an ethical automated-decision-support framework, covering liability concerns and risk assessment: Trustonomy investigates liability concerns, compatible insurance models, ethical decision-making and auditability mechanisms when ambiguities arise. For instance, who is to blame when an RtI is not successfully completed, resulting in an accident? What is the (legally and ethically) suitable course of action in a situation whereby the ADS is about to reach its system limit, but driver state monitoring suggests that a driver intervention would also fail? Moreover, Trustonomy investigates precursors and forecasting models to issue early RtI warnings and provide more time to the human driver to intervene. In addition, it generates emergency trajectory possibilities in case of ambiguities (i.e. when an accident is impossible to avoid but multiple options exist). In order to achieve that, there is need for quantitative risk assessment of potential threats. Beyond the simple risk matrices traditionally used, with well-known defects [14], Trustonomy employs risk maps, following a paradigm already experimented in aviation, leading to improved safety results and costs [15], while a new approach of adversarial risk analysis [16] is used to encounter threats related to malicious attacks to the systems and algorithms regulating automated driving.
• Develop novel Driver Training Curricula for human drivers of ADS: Numerous EU-funded projects have aimed at designing real-life/simulation-based training modules targeting Advanced Driver Assistance Systems (ADAS) [17] and creating new training methodologies to cope with the rapid evolution of active safety systems [18], nevertheless, no Pan-European actions were made to fully acknowledge and include the training on handling these systems into the training curricula. This is especially important in the context of the OEM driving automation-targeted technological race, which is going to end up in deploying L3 vehicles already in 2018 [19] and building the capacity for L4 systems deployment in a 5-year horizon. If this pace is not slowed down by international legal restrictions, it will directly influence the driving behaviour of people, who were so far used to drive in the traditional, non-automated manner. The need for reinventing driver training in the ADS context has been repeatedly underlined by the EU experts [20,21]. Different research studies show that even the use of basic driving assistance systems like Adaptive Cruise Control (ACC) affect the driver's cognitive abilities and overall performance. Although it may reduce workload and stress while being on the road, the situation awareness becomes negatively affected at the same time [22]. Drivers have a tendency to over-rely on the capabilities of such systems and negatively adapt to the new, less demanding conditions.
• Define a Driver Intervention Performance Assessment (DIPA) Framework: It is assumed that driver will become no more than a passenger during periods of automated driving mode performed by CAD vehicle. While driving needs constant monitoring, analysing and decision making to ensure safety [23], driver's role during autonomous driving will become out-of-the-loop, and he may not have enough information to maintain control on operational and tactic level of driving [24]. A driver may be under the influence of two main effects of disengagement: distraction and fatigue. Both may have negative impact on a driver, fatigue might i.e. diminish attention capabilities, and distraction might cause i.e. sharing attention between two tasks [25]. Furthermore, if the driver performs non-driving related task, the effects of its influence may last even up to 15s after its cessation [26]. This provides evidence that driver monitoring will be crucial in at least two phases of take-over: (i) pre-RtI phase when the driver will need to gain information of the current road situation, planned manoeuvres, etc.; (ii) take-over phase, when the driver will start to control the vehicle himself.
• Measure performance, trust and acceptance (simulations and field trials) of human drivers of ADS: There are a number of challenges associated with the concept of trust, with particular reference to the trust a driver has in an ADS. Unlike other "driver states" such as fatigue [27] or high workload [28], there exists no reliable physiological measure of trust. Lack of trust in ADS may induce anxiety in certain situations, resulting in increases in arousal that are able to be detected by physiological indicators, but still, it is difficult to correlate this observed arousal with reduced trust. The challenge is to design ADSs which are trusted appropriately: drivers have to trust them enough to glean all the promised benefits of, for example, traffic efficiency. Several studies have shown that trust is a key determinant for the adoption, intention to use and reliance on automated systems [29]. For sure, "operators tend to use automation that they trust while rejecting automation that they do not" [30]. On the other hand, over-reliance on automation is also not desirable and may lead to situations whereby drivers cognitively distance themselves so far from the driving task that they encounter difficulties in the transition periods. Such over-reliance was cited as one of the causes of the Tesla crash in 2016, with the NTSB noting "the operational design of the Tesla's vehicle automation permitted the car driver's overreliance on the automation, noting its design allowed prolonged disengagement from the driving task and enabled the driver to use it in ways inconsistent with manufacturer guidance and warnings" [31]. The trust that an operator has in a system is not binary; it can be situational as well as dynamic [32] and the challenge is to design an ADS that engenders trust at an appropriate level for any given situation.
III. APROACH
To address the challenges identified in the scope of intervention performance assessment, user trust and acceptance, Trustonomy adopts an integrated approach, where ADS-related state-of-the-art or emerging technologies and solutions are tested and evaluated with real users and nontechnical experts.
In the following paragraphs we further analyse the proposed approach, focusing on the objectives described in the previous section.
A. Driver State Monitoring
Trustonomy investigates the suitability and personalisation potential of various (combinations of) DSM techniques, by measuring and inferring: (i) sensory state, which affects the ability of the human subject to perceive the RtI and the surrounding contextual conditions; (ii) motoric state, in order to identify a body state that can be characterised as out-of-driving position; (iii) cognitive state, which affects the ability for applying attentional resources to perform the intervention; (iv) arousal level, which deteriorates when there is nothing to do for a long time; (v) emotional state, which is also considered explicitly, as it cannot be presupposed that rational behaviour lies at the heart of all decisions and actions.
B. HMI Design Factors
Trustonomy investigates the suitability and personalisation potential of various multimodal HMIs for maximising driver intervention performance, trust and acceptance, including: a) Visual factors (position and size of visual indicators, icons and colours, blinking); b) Auditory factors (loudness, tonal pattern, voice); c) Haptic factors (bodily part, i.e. hand, foot, thigh, vibration pattern, mid-air HMI feedback); d) Timing of onset of RtI; e) Content of HMIs, ranging from automation mode change (e.g. temporal function halt, malfunction), RtI message types (e.g. "please take over!"), intervention action indications (e.g. "hands on wheel!"), to HMIs to display system state and HMIs to indicate system reliability, etc.
C. Risk Assessment
Trustonomy aims at identifying first a detailed catalogue of threats that might affect automated/semi-automated driving and undermine public trust and confidence in this transportation means. Based on such catalogue, it shall undertake a risk matrix approach to screen the most worrisome threats and then perform a detailed quantitative analysis over such list, producing a risk mapping with a full quantitative risk assessment model. Adversarial risk analysis models will be developed to support automated driving, helping in better forecasting how other road users behave, and underpinning improved automated decision-making in driving. Finally, the robustness of algorithms supporting automated driving towards attacks will be explored; as an example, an artificial vision algorithm could be hacked and a STOP sign of a road could be misinterpreted leading to chaotic situations. This will lead to the assessment of such algorithms from an adversarial machine-learning perspective.
D. Early Warning
Based on the risk assessment described above, Trustonomy will define and study precursors of such threats and build forecasting models to issue the RtI warnings as early as possible, in order to provide more time to the human driver to intervene. Essentially, several relevant signals will be tracked, monitored and forecasted based on dynamic models against several thresholds leading to RtIs. Such forecasts will be issued several instants ahead in such manner that if the threshold is expected to be reached by the prediction intervals, the RtIs would be issued.
E. Trajectory Planning
In the case of emergency trajectory planning, the generation of trajectories will be done by comparing different planning algorithms such as parametric planning or graph search planning, with the objective of mitigating the accident consequences. The planning method will be multi-objective, to generate a set of optimal trajectories according to cost functions depending on the accident consequences (fatalities, social cost, financial cost, etc.); genetic algorithms will be used to determine these planned trajectories. A panel composed of experts and regular drivers, cyclists and other road users will be asked to select which is the best trajectory from the ones proposed by the algorithm; this ethical question will then be partially solved by such democratic vote.
F. Driver Training (curricula, methods, material)
Trustonomy identifies the need to prepare newly trained drivers for higher (L3-L4) stages of driver automation in which efficient driver-vehicle interaction will be the key to increasing road safety. To this end, a thorough road-safety targeted risk mapping with respect to both ADS performance as well as driver reception and psycho-motoric performance will be made. This will allow to identify specific priorities to be covered in the course of the training. For each of the identified problems, an individual training method will be developed, and applicable ICT-based training tools will be selected, so that a full training curriculum for human drivers of ADS is composed and tested through real-life piloting (involving passenger vehicles, light/heavy freight, public transport, etc.).
G. Driver Intervention Performance Assessment
DIPA involves the definition of relevant objective measures to assess the quality of intervention performance, such as driver take-over time from onset of RtI, driver intervention time, control stabilisation time, remaining action time, as well as subjective measures for the quality of intervention performance. It consists of a set of measures to determine whether the driver is able to perform an intervention in a safe way or it is worth maintaining the control in ADS and perform a Minimal Risk Manoeuvre.
H. Driver Trust
One of the most important goals of Trustonomy is to assure that automated vehicles are being trust by drivers. A suite of driving simulator studies will be carried out to investigate how a range of users of automated vehicles learn to trust the key features, the situations in which that trust diminishes and how degraded levels of trust can be boosted in an accelerated manner. A toolbox of driving scenarios will be developed which can be used to measure, maintain and, where necessary, increase levels of trust to the point where maximum benefits of automation can be accrued, without the driver becoming over-reliant. Potential research questions include: what aspects of automation (sub-functions) are more susceptible to loss of trust? Which functions can "degrade gracefully" without substantial loss of trust? Can the ADS be "programmed" to self-evaluate its reliability and thus predict the real-time trust that an operator has in it? What interventions could help an operator regain trust?
IV. TRUSTONOMY OVERALL ARCHITECTURE
In order to perform the assessment of the emerging technologies presented in the previous section, we introduce the conceptual architecture depicted in Fig. 2. As depicted, the approach followed for the definition of the conceptual architecture was a mixture of top-down and bottom-up approaches. The processes/actions used for the validation of the conceptual architecture were derived from the user requirements, which were in turn used for the definition of the use cases (top-down). Concurrently, the functionality of the different components was initially defined using the findings from the state-of-the-art review and was adjusted to meet any requirements that were not originally taken into consideration (bottom-up).
As it has been previously highlighted, the project produces outcomes in different ADS-related design domains. Fig. 2 illustrates a conceptual design of the Trustonomy architecture. The upper part of the figure depicts the Trustonomy Frameworks. The different frameworks, namely DSM Assessment Framework, HMI Design Assessment Framework, Automated Decision Support Framework, Driver Training Framework, Driver Intervention Performance Assessment Framework and Trust and Acceptance Measurement Framework are the main outcomes. These frameworks lead to stand-alone tools that can be used for the assessment and evaluation of ADS specific parameters related to performance, risk and trust assessment. Obviously, the domain of analysis of each framework is different and, for this reason, the resulting tools are independent and can be used as stand-alone solutions. To perform the analysis and assessment, the Trustonomy frameworks are based on data that are collected real-time on the Trustonomy pilot sites or on datasets that have been prerecorded during specific scenarios of interest. The Trustonomy pilot sites involve different conditions (e.g. road conditions) and different vehicles (public transport buses, passenger vehicles, freight transport cars and driving simulators). Additionally, multiple configurations and technologies (sensors, DSMs, HMIs) are deployed within the different vehicles to allow the monitoring, study and evaluation of the vehicle state and the driver condition and behaviour.
To support the management of the multiple data streams collected from the pilots and streamed to the data analytics processes and applications, Trustonomy is based on a data management layer that acts as the middleware between the multiple data sources of the project and the Trustonomy Frameworks. The same set of tools will be used for the management of various pre-recorded datasets that will be used for the analysis performed by the Trustonomy Frameworks. The above lead the development of the initial specifications for the individual Trustonomy tools with primary aim to ensure the availability of data sources that each component requires in order to function. As part of this activity, input and output data sources for each component were identified and an overall initial conceptual architecture was drawn up.
Finally, as illustrated on the right part of Fig. 2, a Trials Support Tool will be developed, aiming to assist the execution of the Trustonomy trials. Fig. 3 presents the functional architecture of Trustonomy, with the individual Trustonomy frameworks and their internal functions. The Trustonomy architecture consists of the following frameworks:
A. Functional Architecture
• DSM Assessment Framework: Assess the performance of one or more DSMs.
• HMI Design Assessment Framework: Assess the performance of different HMI designs.
• Automated Decision Support Framework: Undertake the decision of issuing a Request to Intervene or preserve an autonomous driving mode and, if so, plan the driving decisions (e. g., trajectory) accordingly.
• Driver Training Framework: Assess and validate the driving training curricula.
• Driver Intervention Performance Assessment Framework: Assess the driver's ability to intervene in case this is needed.
• Trust and Acceptance Measurement Framework: Produce methodologies to assess trust and acceptance in ADS.
The Data Management layer is not amongst the main outcomes of the project, but it is a layer encompassing functions related to data management procedures, acting as an enabling technology for the Trustonomy Frameworks.
V. CONCLUSIONS This paper elaborated upon the Trustonomy vision on maximising the safety, trust and acceptance of automated vehicles. The key benefit of the proposed approach is that it encounters all the challenges that arise in the dynamic drivervehicle interaction scheme that we see in today's mixed traffic environments. Specifically, an emphasis is given on the driver state monitoring systems, the application of human-machine interfaces, the use of risk assessment for tracing potential threats, the necessity of reinventing driver training material for autonomous vehicles, the measuring of driver's intervention performance, and finally the necessity to measure performance, trust and acceptance. The conceptual and the functional architecture of the envisioned system have been presented, while further research activities include the implementation of the individual Trustonomy frameworks and then the testing and validation of the discussed approach in extended pilots in fully operational environments, evaluating the performance and impact of the proposed approach. These activities will be carried out through the duration of the Trustonomy project. | 5,475.4 | 2020-11-18T00:00:00.000 | [
"Business",
"Computer Science"
] |
Frequency domain bootstrap methods for random fields
Abstract: This paper develops a frequency domain bootstrap method for random fields on Z2. Three frequency domain bootstrap schemes are proposed to bootstrap Fourier coefficients of observations. Then, inversetransformations are applied to obtain resamples in the spatial domain. As a main result, we establish the invariance principle of the bootstrap samples, from which it follows that the bootstrap samples preserve the correct second-order moment structure for a large class of random fields. The frequency domain bootstrap method is simple to apply and is demonstrated to be effective in various applications including constructing confidence intervals of correlograms for linear random fields, testing for signal presence using scan statistics, and testing for spatial isotropy in Gaussian random fields. Simulation studies are conducted to illustrate the finite sample performance of the proposed method and to compare with the existing spatial block bootstrap and subsampling methods.
Introduction
Following Efron's influential paper ( [8]), development of bootstrap resampling procedures has been growing rapidly. Bootstrap resampling constitutes a powerful tool for approximating certain characteristics of a statistic, that cannot be easily calculated by analytical means. In addition, bootstrap methods require no explicit knowledge of the underlying dependence mechanism, or the marginal distribution of the observations. These user-friendly features make bootstrap resampling popular for statistical inference.
In recent decades, various resampling methods for dependent data have been proposed. For time series data, block bootstrap and frequency domain bootstrap are two important classes of bootstrap procedures. For block bootstrap methods, include moving block bootstrap ( [25] and [29]), non-overlapping block bootstrap ( [3]), circular block bootstrap ( [41]), and stationary block bootstrap ( [43]). Despite its simplicity, the accuracy of a block bootstrap estimator critically depends on the block size employed. On the other hand, frequency domain bootstrap methods use the periodogram of the data to derive bootstrap approximations for a class of estimators called ratio statistics, see [5], [10] and [22] for details. [21] proposed the time frequency toggle (TFT) bootstrap for time series, which directly resamples the discrete Fourier transform instead of resampling the periodograms. Unlike periodograms, the bootstrapped discrete Fourier transforms can be transformed back to generate bootstrap resamples of a time series. Thus, TFT bootstrap not only comprises the classical frequency domain bootstrap methods, but is also applicable to statistics that are based on the time domain representation of the observations, including the CUSUM statistic for change-point detection, and the least-squares statistic for unit-root testing. By combining a time domain parametric bootstrap and a frequency domain nonparametric bootstrap, [18] extended the autoregressive aided periodogram bootstrap suggested by [22] and proposed a multiple hybrid bootstrap for linear processes which can generate bootstrap resamples in the time domain. For reviews of resampling methods in time series, see [2], [36], [26], and [38].
Apart from time series, subsampling and resampling methods for spatial data have also become increasingly popular in past decades; see [6] for a brief overview. [16] used a block resampling procedure to bootstrap spatial data. [44] developed a subsampling method for random fields. [42] considered a block bootstrap method for homogeneous strong mixing random fields. [46] used a resampling method to estimate variance for statistics computed from spatial data. [40] proposed subsampling methods for statistical inference in irregularly spaced dependent observations. [28] used spatial subsampling for least squares variogram estimation. [34] and [35] developed the optimal block sizes for spatial subsampling and bootstrap methods. However, they are only applicable to variance estimation. [30] proposed a bootstrap method for Gaussian random fields under fixed domain asymptotics. See [26] for a comprehensive review. Recently, [32] proposed an AR sieve bootstrap for linear random fields. To the best of our knowledge, the development of spatial bootstrap methods focuses mainly on the block bootstrap type methods, and a frequency domain bootstrap method for possibly nonlinear random fields remains absent from the literature.
In this paper, we develop a frequency domain bootstrap method for random fields on Z 2 . The basic principle of the proposed method is to bootstrap Fourier coefficients of observations, and then inverse-transform the resampled Fourier coefficients to obtain bootstrap samples in the spatial domain. By resampling the discrete Fourier transforms instead of the periodograms, we can handle situations where the statistics of interest are not expressible by periodograms, such as scan statistics for testing presence of spatial signal, see Section 6. The proposed frequency domain bootstrap method is similar in spirit to the TFT bootstrap of [21] for time series. However, resampling the Fourier coefficients in spatial data is not as straightforward as that in time series due to an additional rotational symmetry of the coefficients. In addition, applications of the spatial frequency domain bootstrap method, such as testing for signal presence and testing for spatial isotropy, are very different from applications of the time series counterpart, such as change-point detection and testing for unit roots. Moreover, to develop the bootstrap theory in spatial context, we establish an invariance principle for the bootstrap partial sum process indexed by a classical example of Vapnik-Chervonenkis-classes (V C-classes) of set [0, 1] 2 . The results can be generalized to other V C-classes. The proofs of the asymptotic results require different ideas and techniques compared with that for the time series counterpart in [21].
We propose three resampling schemes for bootstrapping the Fourier coefficients of spatial processes in Z 2 . We show that the resulting bootstrap sample correctly captures the second-order moment structure for a large class of random fields. The results are illustrated by applications to constructing confidence intervals of correlograms for linear random fields, testing for the presence of signal, and testing for spatial isotropy in Gaussian random fields. Simulation studies are performed to explore the finite sample performance of the proposed method and to compare with existing spatial block bootstrap and subsampling methods.
This paper is organized as follows. Section 2 provides the problem setting and reviews the spectral theory for spatial processes in Z 2 . In Section 3, three resampling schemes for the Fourier coefficients are proposed to develop bootstrap procedures for spatial processes in Z 2 . The main results are presented in Section 4, in which we establish the validity of the bootstrap procedures by showing the invariance principles of bootstrap samples under some meta-assumptions on the bootstrapped Fourier coefficients. Section 5 verifies these meta-assumptions for the three resampling schemes. In Section 6, we introduce some practical applications for the proposed bootstrap method, and simulation studies on comparing the proposed method with existing spatial block bootstrap and subsampling methods are given. Technical proofs of the theorems and lemmas are provided in Appendices A and B.
Problem setting and spectral theory for spatial processes in Z 2
In this section, we describe the problem settings and preliminary about the spectral theory for spatial processes. First, we introduce some notations. For any vector a = (a 1 , a 2 , . . . , For any set G, denote the cardinality of G by |G|. For random variables X ∈ L p , denote the L p norm as X p = (E(|X| p )) 1/p . For any two sequences of real numbers {a n } and {b n }, denote by a n b n when a n = O(b n ) and b n = O(a n ). For any x ∈ R, x is the greatest integer that is less than or equal to x. All vectors are column vectors unless specified otherwise, hence for any a = (a 1 , a 2 , . . . , a q ) ∈ R q and b = (b 1 , b 2 , . . . , b q ) ∈ R q , the dot product between vectors a and b is defined as the vector multiplication a
Settings and assumptions
Let V (t) : t ∈ Z 2 be a stationary random field on a two-dimensional grid with mean μ = E(V (0)). Assume that we have observed {V (t) : t ∈ T } on a rectangular spatial region We impose the following assumptions about the increasing domain asymptotic framework and the underlying random fields for establishing the asymptotic results.
Assumption A.2. The random field V (t) : t ∈ Z 2 is stationary with absolutely summable auto-covariance function γ(·), i.e., j∈Z 2 |γ(j)| < ∞, where γ(j) = Cov(V (0), V (j)). In this case the spectral density of the random field exists and can be expressed as where G(·) is a measurable function and {ε i } i∈Z 2 is an i.i.d. random field. Let { ε i } i∈Z 2 be an i.i.d. copy of {ε i } i∈Z 2 . Define the coupled version of V (j) as Assume that there exists some p > 0 such that V (j) belongs to L p and Δ p := is the p-stable condition for random fields defined in [9] in which central limit theorems and invariance principles are established for a wide class of stationary random fields. We will discuss the invariance principles in detail in Section 4. The next assumption is a geometric-moment contraction (GMC) condition: Assume that there exist α > 0, C > 0 and 0 < ρ = ρ(α) < 1 such that for all Assumption A.5 is the spatial extension of the geometric-moment contraction condition for time series, see [45]. This condition is fulfilled for short-range dependent linear random fields with finite variance, and a large class of nonlinear random fields such as nonlinearly transformed linear random fields, Volterra fields and nonlinear spatial autoregressive models, see [9] and [7].
Fourier coefficients of spatial processes in Z 2
Denote the sample mean as V T = |T | −1 t∈T V (t), and the centered observa- . Note that the dependence of Fourier coefficients x(j) and y(j) on T is suppressed for notational simplicity. The basic principle of the proposed bootstrap method is to sample the Fourier coefficients of the observations, and then back-transform them to obtain bootstrap samples in the spatial domain. First, we discuss some structural properties of the Fourier coefficients of spatial processes in Z 2 . Note from By using the symmetry property in (2.3), we now partition T as T = N ∪Ñ ∪ M such that the Fourier coefficients defined onÑ are determined by the Fourier coefficients defined on N . Also, the information about the covariance structure and mean of the random field are contained in N and M respectively. Hence, a spatial process can be reconstructed from the Fourier coefficients defined on N and M . In particular, when d 1 , d 2 are both odd, define When d 1 is odd and d 2 is even (similar for d 1 is even and d 2 is odd), define When d 1 , d 2 are both even, define Then, using the symmetry property in (2.3), subsetÑ of T is defined as Note that the Fourier coefficients at j ∈Ñ can be completely determined by the Fourier coefficients at j ∈ N . From (2.2), for all c ∈ R, the Fourier coefficients of {V (t) − c : t ∈ T } at j ∈ N are the same. In other words, the Fourier coefficients in N andÑ are invariant under additive constants and thus contain no information about the mean. In contrast, all of the information about the mean is contained in the Fourier coefficients (2.4) Table 1 shows some examples to illustrate the partitions of set T under different scenarios. Table 2 summarizes the value of the Fourier coefficients in M . Since in spatial statistics the main concern is the covariance structure of the random field, we focus on bootstrapping the Fourier coefficients in N . The issue of bootstrapping spatial mean is deferred to Section 4.2.3.
Table 2
Fourier coefficients in M contain information about the mean.
Kernel spectral density estimation
We consider the kernel spectral density estimator is the periodogram at frequency λ j . The periodogram can be set to 0 on D since it only contains information about the mean. We impose the following mild regularity assumptions on the kernel function K(·).
Assumption K.4. The quantity K h (λ) in (2.6) satisfies the following uniformly Lipschitz condition: for some constant L K > 0, and where k(·) is defined in (2.7). From the above representations it is clear that for large T , for bounded K(·). By (2.8), if the kernel K(·) is uniformly Lipschitz continuous with compact support, then Assumption K. 4 holds for a small enough h T = (h T 1 , h T 2 ). For infinite support kernels, if K(·) is bounded and continuously differentiable, then Assumption K.4 also holds. Assumptions K.1 to K.4 hold for many commonly used kernels such as uniform kernels K(λ 1 , ).
Frequency domain bootstrap
In this section, we propose three bootstrap schemes in the frequency domain, namely Residual-Based Bootstrap (RB), Wild Bootstrap (WB) and Local Bootstrap (LB), for resampling the Fourier coefficients. Similar bootstrap schemes in time series context are first proposed by [10], [37], and [17] respectively. A bootstrap procedure that produces resamples of spatial processes is then developed in Section 3.2.
Residual-based bootstrap (RB)
In RB, we first standardize the Fourier coefficients to obtain a set of residuals, which consists of approximately i.i.d. normal random variables. Hence, i.i.d. resampling methods can be applied to yield a resample of Fourier coefficients.
Step 1: Estimate the spectral density f by f T , which satisfies Step 2: For the Fourier coefficients x(j) and y(j), j ∈ N , define , s j,2 = y(j) .
For j ∈ N and k = 1, 2, define the residuals s j,k by standardizing s j,k as Note that the residuals s j,k are approximately independent standard normal variables, see Theorem 4.1 of [33].
Step 4: Define the bootstrapped Fourier coefficients by where j ∈ N .
Wild bootstrap (WB)
Compared to RB, the WB further exploits the asymptotic normality of the Fourier coefficients by generating independent standard normal random variables instead of resampling the residuals.
Step 1: Estimate the spectral density f by f T , which satisfies (3.1).
Step 2: Define the bootstrapped Fourier coefficients by where {G j,k : j ∈ N, k = 1, 2} are independent standard normal random variables. For RB and WB, conditions for kernel spectral density estimators satisfying (3.1) can be found in [33] for a large class of random fields.
Local bootstrap (LB)
In contrast to RB and WB, LB does not require any spectral density estimation. Instead, LB makes use of the smoothness of spectral density, which ensures that in a neighborhood of each frequency, the distributions of the Fourier coefficients are nearly identical. Therefore, replicates of the Fourier coefficients can be produced by directly resampling the Fourier coefficients within neighborhoods.
Step 1: Select a symmetric, nonnegative kernel K(·) that satisfies Assumptions Step 3: For j ∈ N , define the uncentered bootstrapped Fourier coefficients by resampling within a neighborhood of j.
Step 4: Define the centered bootstrapped Fourier coefficients by
Bootstrap procedure for spatial processes
With the three bootstrap schemes for the Fourier coefficients, we develop the bootstrap procedure for resampling spatial processes as follows: Step 1: Compute the Fourier Coefficients x(j), y(j) for j ∈ T using Fast Fourier Transform (FFT).
Step 4: Set bootstrapped Fourier coefficients onÑ according to the symmetric Step 5: Use the inverse FFT algorithm to transform the bootstrap Fourier The resulting bootstrap spatial process {Z * (t) : t ∈ T } is real-valued and centered, and can be used for inference on a large class of statistics that are based on partial sums of the centered process {Z(t)}; see Section 6 for examples. Note that since the Fourier coefficients in N can also be uniquely determined by the Fourier coefficients inÑ , it would be technically the same if we interchange the roles ofÑ and N in the above bootstrap procedure. That is, we can first obtain a bootstrap sample onÑ instead of N in Step 3, and then set bootstrapped Fourier coefficients on N instead ofÑ in Step 4 using the rotational symmetry in (2.3).
Remark 3.1.
To compare the computational cost of the proposed frequency domain bootstrap methods and the existing spatial block bootstrap method, we first investigate the number of random data required to be simulated for generating one bootstrap spatial resample. The classical block bootstrap with a block size m 1 ×m 2 typically requires a simulation of ( d 1 /m 1 +1)( d 2 /m 2 +1) random data to generate one bootstrap spatial resample, which is with order of O(|T |/(m 1 m 2 )). On the other hand, the proposed frequency domain bootstrap methods need to simulate 2|N | random data for RB and WB and 4|N | random data for LB to generate one bootstrap spatial resample, which are of order O(|T |). Hence, the computational cost for generating resamples in the classical block bootstrap method is smaller compared to that of the proposed methods as the block size m 1 × m 2 diverges. However, in practice the computational complexity for evaluating the test statistics under consideration is O(|T |) and hence the computational complexity for conducting bootstrap inference, for example constructing bootstrap confidence intervals, is O(B|T |), where B is the number of bootstrap replications. Thus, the computational cost of both methods are essentially of the same order O(B|T |).
Main results
In this section, we first review the invariance principles of the partial sum process of a random field. Then, we present the main results of the paper: the invariance principles of the partial sum process of the bootstrap sample (Theorem 4.3), and the validity of the bootstrap methods (Corollaries 4.4 and 4.5).
Invariance principles for random fields
To facilitate applications to different situations, we consider the following collection of Borel subsets of [0, 1] 2 as the index set of the partial sum process: The class Q 2 is a classical example of Vapnik-Chervonenkis-classes (V C-classes), with V C-index equal to 5; see Section 2.6 of [48].
We equip the class Q 2 with the pseudo-metric ρ( is a linear mapping that translates and rescales [0, 1] 2 to E.
The following lemmas by [9] give the invariance principles of the partial sum processes
Invariance principles for bootstrap samples
This section contains the main result: the invariance principles for the partial sum process of the bootstrap sample. This result implies the validity of the bootstrap methods in case the invariance principles are involved in the asymptotic of the underlying test statistics. To facilitate further extensions to bootstrap schemes other than RB, WB and LB, the results are formulated in a general way under some meta-assumptions on the resampling scheme. In Section 5, we verify the meta-assumptions for the RB, WB and LB schemes.
Assumptions on the bootstrapped Fourier coefficients
Denote L * , E * , Var * , Cov * , and P * as the bootstrap distribution, expectation, variance, covariance and probability, conditional on the data, respectively. Moreover, let {·|V (·)} denote conditioning on the data.
Assumption B.2. Uniform convergence of the variances of the bootstrapped Fourier coefficients: Assumption B.3. There exists some p > 8 such that the p-th moments of the bootstrapped Fourier coefficients are uniformly bounded: The Mallows distance on the space of all real Borel probability measures with finite variance is given by where the infimum is taking over all random variables X 1 and X 2 with marginal distributions L 1 and L 2 , respectively. Convergence in Mallows distance implies convergence in distribution and convergence in the second moment, see [31].
Assumption B.4. The probability distributions of the bootstrapped Fourier coefficients converge uniformly in the Mallows distance to the same limit as the Fourier coefficients do, i.e.,
Asymptotic results on bootstrap samples
The following lemma asserts that the bootstrap sample {Z * (·)} and the corresponding partial sum process have correct auto-covariance structures.
(a) For any
(c) If Assumptions A.2 and B.2 also hold, then for any fixed l 1 , l 2 ∈ T , For any rectangular region T , denote the Q 2 -indexed partial sum processes of the bootstrap sample {S * The following theorem establishes the invariance principles for the Q 2 -indexed partial sum processes of the bootstrap samples.
Theorem 4.3. Suppose that Assumptions
The following corollary states that the bootstrap samples preserve the secondorder dependence structure of the random field asymptotically. In particular, if the underlying random field is Gaussian, then the proposed bootstrap procedure produces asymptotically valid approximation of the centered random field
Bootstrapping the mean
The bootstrap sample {Z * (t) : t ∈ T } obtained from Section 3.2 is real-valued and centered. In order to obtain a non-centered bootstrap process, we may employ a separate bootstrap procedure independent of {Z * (·)} to acquire a bootstrapped mean μ * T . For details on bootstrapping the mean μ * T , see [16], [42], [46], [26], and [35]. Then, the non-centered bootstrap process V * (·) = Z * (·)+μ * T gives a bootstrap approximation of V (·). Note that {Z * (·)} contains information about the covariance structure of the spatial process and μ * T contains information about the mean level. The following corollary shows that the non-centered bootstrap sample V * (·) has the same asymptotic behavior as the original spatial process in terms of the partial sum process.
where Φ(·) denotes the standard normal distribution function. Then, it holds in probability that Possible generalizations of asymptotic results in Section 4 to other V C-classes with V C-index equal to V as index sets can be established with p > 2(V − 1) moment conditions, see Theorem 2(i) of [9] for the invariance principles indexed by V C-classes. For example, the class is a classical example of V C-classes, with V C-index equal to 3, and the above asymptotic results hold when the moment condition with p > 4 in Assumptions A.4(p) and B.3 hold.
Remark 4.1. From Theorem 4.3 and Corollary 4.5, the proposed frequency domain bootstrap method can mimic the second-order dependence structure of the random field asymptotically. Hence, the proposed method is applicable to statistics of interest which depend asymptotically on second order dependence structure. In Section 6, we discuss applications of the proposed bootstrap method to confidence intervals construction of correlograms for linear random fields, testing for signal presence in random fields, and spatial isotropy test for Gaussian random fields. The validities of the proposed bootstrap method for the above applications are also theoretically investigated.
Validity of meta-assumptions for the resampling schemes RB, WB, and LB
In this section, we prove the validity of the bootstrap schemes RB, WB, and LB under some conditions on the spatial processes. We also give conditions under which the bootstrap schemes remain valid when the bootstrap methods are applied to an estimated field { V (t) : t ∈ T } rather than the observed field {V (t) : t ∈ T }. t ∈ Z 2 }, i.e., V (j) = s∈Z 2 a s ε j−s with |a j | ≤ Cρ j for some ρ ∈ (0, 1) and C > 0, satisfies Assumptions A.4(p) with p > 8 and A.5 with E |V (0)| 16 Example 5.2 (Volterra Fields). Volterra Fields is a class of nonlinear random fields which plays an important role in the nonlinear system theory. Let {ε t } t∈Z 2 be an i.i.d. random field with E(|ε 0 | p ) < ∞ for some p ≥ 32. Consider the second order Volterra process V (t) : t ∈ Z 2 , where {a s1,s2 } are real coefficients with a s1,s2 = 0 if s 1 = s 2 . Then, by Rosenthal inequality, there exists a constant C p > 0 such that where A j = s1,s2∈Z 2 (a 2 s1,j + a 2 j,s2 ) and B j = s1,s2∈Z 2 (|a s1,j | p + |a j,s2 | p ). Thus, if a s1,s2 = O(ρ max{ s1 , s2 } ) for some ρ ∈ (0, 1), then δ j,p = O(ρ j ), and Assumptions A.4(p) with p > 8 and A.5 with E |V (0)| 16 < ∞ hold.
In many applications, the bootstrap methods are not applied directly to stationary spatial data {V (t)}, but to an estimate { V (t)} from spatial data {Y (t)}, see, for example, the testing for signal presence using scan statistics in Section 6.2. The following corollary gives conditions for the validity of the bootstrap schemes in this situation.
Remark 5.1. For random fields exhibiting complex non-linear trends, the proposed bootstrap procedure requires some modifications. Specifically, assume that the underlying random field {Y (t)} can be modeled by Y (t) = μ(t) + V (t) for t ∈ T , where μ(t) is a non-linear trend, and V (t) is a zero-mean random field which satisfies the conditions stated in the Section 2. From Corollary 5.2, the proposed bootstrap method remains valid for the field { V (t)} estimated from the spatial data {Y (t)} under some conditions on the decay rate α T of the average squared error. Hence, to apply the proposed bootstrap method, we can proceed as follows. First, we apply local smoothing or kernel methods to estimate the trendμ(t), and then an estimated field { V (t)} can be obtained by The proposed bootstrap method can be applied on { V (t)} to get a centered bootstrapped sample {Z * (t)}, and then employ a separate bootstrap procedure independent of {Z * (t)} to acquire a bootstrapped mean μ * T , and get a non-centered bootstrap field V * (t) = Z * (t) + μ * T as illustrated in Section 4.2.3. Finally, a bootstrapped sample Y * (t) =μ(t) + V * (t) can be obtained.
Remark 5.2.
To implement the proposed frequency domain bootstrap methods, we need to specify one tuning parameter, the bandwidth h T = (h T 1 , h T 2 ) ∈ R 2 . From Theorem 5.1, the bandwidths have to satisfy some decay rate conditions. To be precise, we require |h T | = O(|T | −η ) for some 0 < η < 1/2 for RB and WB, and |h T | → 0 and (|h T | 4 |T |) −1 → 0 for LB. For example, |h T | = O(|T | −1/5 ) works for all three methods. To provide a more precise guideline for the choice of h T , in Section 6.1, we first conduct a sensitivity analysis for a wide range of bandwidths, and then select the one with the best coverage of the confidence intervals. As the bandwidths in RB and WB are for the kernel spectral density estimation, we can also employ the adaptive bandwidth selection proposed in [39] or [19]. Although no theoretically supported optimal bandwidth selection method is available for LB, the bandwidth obtained from the adaptive bandwidth selections for RB and WB can be shown to satisfy the required conditions for LB asymptotically. In Section 6, we apply the same range of bandwidths in RB, WB and LB, and similar results occured in all three methods. It indicates that the same suitable bandwidths for RB and WB may also be appropriate for LB. For more discussions in bandwidth selection in local bootstrap in time series context, see [37].
Applications and simulation studies
In this section, we demonstrate applications of the proposed bootstrap procedures to constructing confidence intervals for correlograms for linear random fields, testing for signal presense using scan statistics, and testing for spatial isotropy of Gaussian random fields. We also perform numerical studies to compare the proposed bootstrap methods with existing methods including the spatial block bootstrap and spatial subsampling methods. Unless specified otherwise, in all of the simulation experiments, we consider random fields {V (·)} on a 50 × 50 region T , i.e., d 1 = 50, d 2 = 50 and |T | = 2500. In addition, the number of bootstrap samples is set as 1000. Moreover, the Gaussian kernel K(λ) = φ(λ) is employed for the kernel spectral density estimation in RB and WB, and the smoothing function in LB, where φ(·) is the bivariate standard normal density function. For the spatial block bootstrap, we employ the overlapping block bootstrap ( [26]). For the spatial subsampling method, we use the overlapping subblocks subsampling ( [15]) with the suggested subblock size d
Confidence intervals construction of correlograms for linear random fields
One major application of frequency domain bootstrap in linear time series is on ratio statistics such as sample autocorrelation functions; see [5] and [21]. Analogously, for spatial statistics, the proposed frequency domain bootstrap method can be applied to construct confidence intervals of the spatial correlograms for linear random fields. Consider a random field V (t) : t ∈ Z 2 with covariance function C(h). The correlogram at lag t is defined as the ratio statistic ρ(t) = C(t)/C(0). A natural estimator of the correlogram is given bŷ s∈T V (s) and T (t) = {s : s, s + t ∈ T }, is the method-ofmoments estimator of the covariogram at lag t. To apply the proposed bootstrap methods, generate B bootstrap samples from either RB, WB or LB. For the i-th bootstrap sample, we compute the correlogram estimatorρ * (i) (t). Then, confidence intervals can be constructed from the sample quantiles of the correlogram estimates of the resamples.
To illustrate the construction of bootstrap confidence intervals for correlograms, consider real-valued mean-zero Gaussian random fields on Z 2 with a Gaussian covariance function where σ 2 is the partial sill parameter, φ is the range parameter and η is the nugget effect. First, we consider the model (η, σ 2 , φ)=(0, 1, 1) to investigate the choice of the bandwidth parameter h T for RB, WB and LB schemes, and the choice of block size for the Block Bootstrap (BB). For each resample of data, we compute the correlogram estimatesρ(t) at a range of lags: t = (1, 0), (0, 1), (1, 1), (2, 0) and (0, 2). Then, for each lag, a 95% confidence interval is constructed from the sample quantiles of the correlogram estimates of the resamples. The above procedure is repeated 1000 times to investigate the coverage accuracy of the confidence intervals. The results are summarized in Figure 1. It can be seen that the bandwidths h T = (0.05, 0.05), (0.11, 0.11), or (0.15, 0.15) give good performance for all of the proposed RB, WB, and LB schemes. For BB, block sizes 4 × 4, 7 × 7, and 13 × 13 are recommended.
Next, we consider the models (η, σ 2 , φ)=(0, 1, 0.5), (0, 1, 1), (1, 1, 0.5), and (1, 1, 1) to explore the effect of various decay rates of spatial dependency, and the presence of the nugget effect. For each model, a 95% confidence interval is constructed for each lag based on the sample quantiles of the bootstrapped correlogram estimates. Again, 1000 replications are performed to investigate the coverage accuracy of the confidence intervals. The results are summarized in Tables 3 and 4. It can be seen that the coverage accuracy of the proposed bootstrap methods is much closer to the nominal level of 95% than that of the block bootstrap method. On the other hand, the coverage accuracy of the block bootstrap method is not stable in the sense that it ranges from 50% to nearly 100% coverage under various models.
Testing for signal presence in random fields
In this subsection, we consider the problem of detecting a deterministic signal against a noisy background. This problem has received considerable attention and has profound applications in epidemiology, astronomy, and biosurveillance. The standard statistical tool to address this problem is the spatial scan statistic; see [24], [13], [14], [12], and [4]. Consider the observations {Y (t) : t ∈ T } given by where T ∈ Z 2 is a rectangular region, {V (t) : t ∈ T } is a zero-mean process, and I T ⊂ T is the location of a deterministic signal with magnitude s. We assume that I T is sufficiently large in the sense that where C I T ⊂ [0, 1] 2 is a circle with radius r I T > 0, and g T is the linear mapping defined in Section 4.1. Let Z T = {g T (A) : A ∈ Q 2 } be the collection of all possible rectangular subsets of T , and μ A = 1 |A| t∈A E (Y (t)). To determine whether a signal exists, we consider the hypotheses where μ 0 ∈ R. To test between H 0 and H 1 , the scan statistics , is a natural candidate; see Theorem 1 of [4] in time series context, and [23] and [49] in spatial context. In particular, H 0 should be rejected for a large SS T .
To determine the critical value of the test statistics SS T , the bootstrap methods can be applied to the locally demeaned spatial data The following theorem asserts that, by using the locally demeaned data, the frequency domain bootstrap methods asymptotically yield the null distribution even under the presence of a signal.
where P 0 is the probability measure under H 0 with s = 0, and P * is the conditional probability measure given {Y (t) : t ∈ T } using any bootstrap method satisfying Assumptions B.1 to B.4.
The following theorem proves the consistency of the bootstrap test.
where P 1,s is the probability measure under H 1 with signal magnitude s, and c * is the critical value determined by any bootstrap method satisfying Assumptions B.1 to B.4, i.e., P * (SS * T ≥ c * ) = α with significance level α. The following simulation experiments evaluate the finite sample performance of the bootstrap methods on testing for the presence of spatial signal. We generate real-valued zero-mean non-Gaussian random fields V (·) using point-wise transformation of homogeneous Gaussian random fields. First, we generate realvalued mean-zero Gaussian random fields X(·) on a 50 × 50 region T using the 1, 1, 1). Then, for each t ∈ T we transform X(t) to a non-Gaussian , where F N is the cumulative distribution function of the standard normal distribution and F −1 R is the inverse distribution function of a centered distribution R. In our simulation, Student's t (20) distribution is used. The signal location I T is taken as a 8 × 8 square grid at the center of T . Let Z T = Q 2 be the collection of all possible rectangular regions of T . For simplicity, let Z T contain all rectangular regions A ⊂ T with a fixed size of 10 × 10.
We investigate the size and power of the bootstrap test for signal detection under different magnitudes: s = −3, −2, −1, 0, 1, 2, and 3. The window width w T = 10 is used to compute the locally demeaned data. We compute the scan statistic for each bootstrap sample and compute the critical values from the quantiles of the bootstrapped scan statistics. Table 5 reports the rejection rate of the test using block bootstrap, RB, WB, and LB under various values of s. Different block sizes 4 × 4, 7 × 7, and 13 × 13 are used for block bootstrap, and a wide range of bandwidths, h T = (0.05, 0.05), (0.1, 0.1), (0.15, 0.15), (0.2, 0.2), and (0.25, 0.25) are used for RB, WB, and LB for evaluating the effect of bandwidth selections on the performance. It can be seen that the performance of RB, WB, and LB is superior to that of the block bootstrap, and robust to the choice of the bandwidth h T . One possible reason for the good performance of frequency domain bootstrap methods is that the frequency domain bootstrap samples have a constant mean of zero; see Lemma 4.2. On the other hand, even though the locally demeaned data are used, the block bootstrap samples may still occasionally contain regions which largely deviate from zero, which affects the performance of the test. From Table 5, the effect of this phenomenon magnifies with an increase in block size.
Spatial isotropy test for Gaussian random fields
In this subsection, we study the application of the proposed bootstrap methods to test for spatial isotropy of Gaussian random fields, i.e., the covariance between two sites depends on their distance but not direction. Since the asymptotic distributions of spatial covariances and variograms depend on the fourth order structure, Gaussian assumption is needed for the random fields in this application. Since it is difficult to exhaust all possible distances and directions, [15] considered the null hypothesis of isotropy as H 0 : 2γ(t i ) = 2γ(t j ), ∀t i , t j ∈ Λ, t i = t j , and t i = t j , where t = √ t t, Λ = {t 1 , . . . , t m } is a prespecified set of sites, and 2γ(t) = E(V (0) − V (t)) 2 is the variogram at lag t. Let G = (2γ(t 1 ), . . . , 2γ(t m )) be a vector of variograms in Λ. Observe that, under H 0 , there exists a full row rank matrix A such that AG = 0. For example, if Λ = {(1, 0), (0, 1)}, then G = (2γ(1, 0), 2γ(0, 1)) , and we may set A = [1 − 1]. Based on this observation, [15] derive the test statistic whereĜ T = (2γ(t 1 ), . . . , 2γ(t m )) is the sample variogram vector that estimates G, is the estimator of the variogram at lag t, T (t) = {s : s, s + t ∈ T }, andΣ R is a consistent estimator of Σ R , the covariance matrix of the sample variogramŝ G T . Under H 0 and some regularity conditions, Theorem 1 of [15] states that where d is the row rank of A. However, the convergence of the test statistic appears to be slow. Therefore, [15] consider a subsampling method to determine the p-value of the test. In the following, we consider using the proposed frequency domain bootstrap method to determine the p-value of the test.
Following the simulation study in [15], we employ a mean-zero Gaussian random field on Z 2 with a spherical covariance function otherwise , (6.5) where σ 2 is the partial sill parameter, φ is the range parameter, η is the nugget effect, and r = √ h Bh is related to a geometric anisotropy transformation. Specifically, given an anisotropy angle ψ A and anisotropy ratio ψ R , define the rotation matrix R and shrinking matrix T as then B = R T T R is a 2 × 2 positive definite matrix representing a geometric anisotropy transformation. A random field with spherical covariance function (6.5) is in general anisotropy except that it is isotropy when ψ R = 1. In addition, if ψ A = 0, then the main anisotropic axes are aligned with the (x, y) axes. See, for example, Section 5.1 of [47] for details. For the Gaussian process, it can be shown that the covariance function (6.5) satisfies the absolute integrability condition, which is sufficient for Theorem 1 of [15] to hold. We consider model parameters (η, σ 2 , φ, ψ A , ψ R ) = (2, 3, 4, 0, ψ R ) for different anisotropy ratio ψ R . Also, set Λ = {(1, 0), (0, 1)}, G = (2γ(1, 0), 2γ(0, 1)) , 2γ(1, 0), 2γ(0, 1)) . Thus, the test statistics (6.3) becomes whereΣ R may be estimated by subsampling or by the proposed bootstrap methods. However, since (AΣ R A ) −1 is only a normalizing factor in (6.6), we may focus on subsampling and bootstrapping the test statistic Next, we briefly outline the subsampling and bootstrap methods. For the spatial subsampling, the region T is divided into k T small overlapping subblocks, known as subsampling windows, which are congruent to T in both configuration and orientation. Denote T i sub , as the i-th subblock. For each of the k T subblocks, compute the statistic whereγ i sub (t) is defined similarly as in (6.4), but with T (t) replaced by T i sub (t) = {s : s, s+t ∈ T i sub }. Using the T S i T,sub s, the p-value for the test can be calculated by
and the null hypothesis is rejected if the p-value is smaller than the significance level α.
For the proposed bootstrap methods, B bootstrap samples are generated from either RB, WB or LB. For the i-th bootstrap sample, we compute the variogramγ i boot (1, 0) using (6.4). Next, define the variogram difference V D i = 2γ i boot (1, 0) − 2γ i boot (0, 1), and the bootstrapped test statistic Note that centering of V D i s is required since V D i has a non-zero mean under the alternative. Similar to the test for signal presence, this centering procedure allows the bootstrapped test statistic to converge to the null distribution even under the alternative hypothesis; see Theorem 6.3 below. Finally, the p-value of the test can be calculated by T ,boot ≥T S T } /B, and the null hypothesis is rejected if the p-value is smaller than the significance level α.
The following theorem states that the bootstrapped test statistic converges to the same limit as that of T S T , and hence the proposed bootstrap method is valid.
Theorem 6.3. For a stationary Gaussian random field {V (t) : t ∈ T }, under the assumptions of Theorem 1 in [15], we have
where P 0 denotes the probability measure under H 0 , and P * denotes the conditional probability given {V (t) : t ∈ T } using any bootstrap method satisfying Assumptions B.1 to B.4. Table 6 summarizes the rejection rates of the bootstrap test by spatial subsampling, RB, WB and LB under different values of anisotropy ratio ψ R . It can be seen that the performance of the proposed methods is superior to that of spatial subsampling in both size and power.
Conclusion
This paper develops a frequency domain bootstrap method for random fields on Z 2 . Three bootstrap schemes for resampling the Fourier coefficients are proposed. Inverse-transformations are then applied to obtain resamples in the spatial domain. The resulting bootstrap resamples capture the correct second-order moment structure for a large class of random fields. Moreover, invariance principles of the partial sum process indexed by a classical example of Vapnik-Chervonenkis classes of Borel subsets of [0, 1] 2 are established. The results can be easily generalized to other Vapnik-Chervonenkis classes. The frequency domain bootstrap method is simple to apply and is demonstrated to be effective in various applications including constructing confidence intervals of correlograms for linear random fields, testing for signal presence using scan statistics, and testing for spatial isotropy in Gaussian random fields. Simulation studies are conducted to illustrate the finite sample performance of the proposed method and to compare with the existing spatial block bootstrap and subsampling methods. For small or moderate sample sizes, since the effective number of blocks for block bootstrap and subsampling methods would be small when the block size is chosen to be large, severe bias would be induced, and the finite sample performances are sensitive to the selection of block size. However, this problem cannot be resolved by choosing a smaller block size as the dependency structure of the underlying spatial fields cannot be preserved when the block size is too small. On the other hand, although the bandwidth selection may also affect the performance of the proposed frequency domain bootstrap for small or moderate sample sizes, its effect is relatively small compared to that of block bootstrap as shown in the simulation studies in Section 6. However, as shown in Section 4, the proposed frequency domain bootstrap can only mimic the second order moment structure of the underlying spatial fields, and hence may not be appropriate for general statistics which involve higher moment structures, or may require some form of transformations of the data. On the other hand, in general the block bootstrap methods can be applied directly without any transformations beforehand.
Frequency domain bootstrap for random fields
Using Lemma A.1, we can handle the above sum by decomposing the inmost sum into the 5 terms. For the first term, by the absolute summability of γ(·), we have Also, by the absolute summability of γ(·), we have For the sum of the last three terms of Lemma A.1, we have Similar arguments can be applied on the remaining two terms. Finally, the sum of the second term is Putting everything together, we obtain (b). The proof of (c) is analogous. A simple calculation shows that Cov(Z(l 1 ), Z(l 2 )) = Cov(V (l 1 ), V (l 2 )) + o(1) by the absolute summability of the auto-covariance function.
Proof of Theorem 4.3. To prove the invariance principle of the Q 2 -indexed partial sum process of the bootstrap sample, we have to show the finite-dimensional convergence and also the tightness of the partial sum process. The following lemma shows the convergence of the finite-dimensions distribution of the Q 2indexed partial sum process of the bootstrap sample. Its proof is deferred to Appendix B. A.2, B.1 and B.4 are fulfilled, for B 1 , B 2
(b) If Assumptions
The following lemma gives the critical step towards tightness of the Q 2indexed partial sum process of the bootstrap sample. Its proof is deferred to Appendix B. A.2, B.1 to B.3, for any > 0 and A T ∈ Q 2 , we have
Lemma A.3. Under Assumptions
Theorem 13.5 of [1] gives a characterization of weak convergence via convergence of the finite-dimensions distributions as well as tightness. Lemmas A.2 and A.3 show that these conditions are fulfilled. It completes the proof of Theorem 4.3.
Proof of Corollary 4.4. The proof is analogous to the proof of Lemma A.2(a) and is thus omitted.
Proof of Corollary 4.5. The corollary is an immediate consequence of Theorem 4.3, thus the proof is omitted.
A.2. Proofs of Section 5
Proof of Theorem 5.1. In the following we only prove the assertions for x * (·), while the assertions for y * (·) follow because x * (j) d = y * (j) (conditionally given V (·)). Since Assumption B.1 directly follows from the definition of the bootstrap schemes, we will show that Assumptions B.2 to B.4 are also valid under the assumptions stated in the theorem.
Also, by Theorem 4.3 in [33], we also have the following four conditions on the sums of the periodograms and Fourier coefficients: for some constant C 1 , C 2 ≥ 0 and q = 4 + with some ∈ (0, 1). Next, since we have for k = 1, 2, Also, we have for k = 1, 2, Hence, Assumption B.3 holds with p = 2q > 8 since 2 , j ∈ N and k = 1, 2. Since s * j,k are from the standardized residues, we have E * (s * 1,k ) 2 = 1, From this, it follows that The last line follows from the uniform convergence of the empirical distribution function of the Fourier coefficients. Note that convergence in Mallows distance is equivalent to convergence in distribution in addition to convergence of the first two moments. In this case, the convergences in all three cases hold uniformly in j.
Proof of Corollary 5.2. It follows directly from Theorem 6.1 in [33] and the above proof of Theorem 5.1.
A.3. Proofs of Section 6
Proof of Theorem 6.1. By the conditions stated in the theorem, it is easy to see that { V (t)} satisfies the condition in Corollary 5.2. Since Z T ⊂ Q 2 , the results follows from Theorem 4.3(a).
Proof of Theorem 6.2. It is easy to see that the scan statistics diverges to infinity under the conditions stated in the theorem.
For the first term J 1 , For the second term J 2 , since the underlying field is Gaussian, the Fourier coefficients x(i) and y(i) are Gaussian. By the Gaussianity of x(i) and y(i), asymptotic independence and strong law of large numbers, we have K * | 11,393.2 | 2021-01-01T00:00:00.000 | [
"Mathematics"
] |
Optical and Electrical Properties of Magnetron Sputtering Deposited Cu – Al – O Thin Films
We have successfully prepared Cu–Al–O films on silicon (100) and quartz substrates with copper and aluminum composite target by using radio frequency (RF) magnetron sputtering method. We have related the structural and optical-electrical properties of the films to the sputtering area ratio of Cu/Al for the target (rCu/Al). The deposition rate of the film and rCu/Al can be fitted by an exponential function. rCu/Al plays a critical role in the final phase constitution and the preferred growth orientation of the CuAlO2 phase, thus affecting the film surface morphology significantly. The film with main phase of CuAlO2 has been obtained with rCu/Al of 45%. The films show p-type conductivity. With the increase of rCu/Al, the electrical resistivity decreases first and afterwards increases again. With rCu/Al of 45%, the optimum electrical resistivity of 80Ω·cm is obtained, with the optical transmittance being 72%–79% in the visible region (400–760 nm). The corresponding direct band gap and indirect band gap are estimated to be 3.6 eV and 1.7 eV, respectively.
Introduction
Transparent conducting oxide (TCO) films have been widely used in the fields of flat panel displays, solar cells, touch panels, and other optoelectronic devices owing to their high electrical conductivity and optical transmittance in visible region [1][2][3].Up to now, however, most of the TCOs obtained are characterized by n-type conductivity.The lack of p-type TCOs restricts the development of p-n junction based device.Therefore, developing stable p-type TCOs becomes the hot research topic [4,5].Kawazoe et al. [6] investigated delafossite-structured CuAlO 2 and successfully prepared CuAlO 2 films using pulse laser deposition (PLD) method in 1997.The obtained films are good p-type TCO materials, with room temperature electrical conductivity being 0.095 Scm −1 , optical transmittance being 80% and direct band gap being 3.5 eV.Alternatively, Gao et al. [7] had fabricated p-type transparent CuAlO 2 thin films by spinon technique and reported the film had a conductivity of 2.4 Scm −1 with the optical band gap being 3.75 eV.Due to the excellent optical-electrical properties, CuAlO 2 film is attracts increasing research interest for the potential applications ranging from p-n junction to invisible circuits.
So far, various deposition techniques have been employed to fabricate highly transparent conductive CuAlO 2 thin films, including chemical vapor deposition (CVD) [8], pulsed laser deposition (PLD) [9,10], sol-gel [11], and sputtering [12,13].Among these techniques, RF magnetron sputtering takes the advantage of strong adhesion between film and substrate, large area deposition, low substrate temperature, and good compatibility with current microelectronics.However, various deposition parameters such as oxygen partial pressure, the variety of sputtering target, and sputtering power may influence the International Journal of Antennas and Propagation properties of the films.Furthermore, most of the CuAlO 2 films deposited by RF sputtering method are always using high-cost CuAlO 2 ceramic target.In this work, we simplify the preparation process by using the low-cost copper and aluminum composite target instead of CuAlO 2 ceramic target.We investigate the influence of sputtering area ratio of Cu/Al for the target (r Cu/Al ) on the properties of obtained films.We elucidate the underlying mechanisms between the film structure and the optical band gap.
Experimental
Cu-Al-O films were deposited on silicon (100) and quartz substrates, respectively, by RF magnetron sputtering method at room temperature (∼22 • C).High purity of 99.999% copper and aluminum composite target was used as the sputtering materials.The composition of the film was controlled by changing the sputtering area ratio r Cu/Al of Cu and Al for the target.Pure argon and oxygen were used as sputtering gas and reactive gas, respectively.The substrates were cleaned ultrasonically in 5% (volume content) HF, acetone and ethanol for silicon, in acetone, and ethanol for quartz before being loaded into the chamber.The HF solution was stored in closed plastic container, and it was used following the safety rules [14,15], such as wearing special respirator and gloves to prevent the HF from contacting our skin.Before deposition, base pressure of the chamber was evacuated to 4 × 10 −4 Pa by rotary and molecular pump.During deposition process, the working pressure was maintained at 0.3 Pa and sputtering power was fixed at 80 W. We varied the r Cu/Al over the range 20%-55% to ensure the composition changed from Al being excessive to Cu being excessive.The thickness of the films was controlled being 300 ± 10 nm via the deposition duration time.Before characterized the properties, the samples were annealed in GSL-1400X tubular furnace with argon ambience for 3 h.
The thickness of the films was measured by UVISEL ER wide spectral range Ellipse leaning meter.The structural character was identified by using X'Pert Pro MPD X-ray diffractometer with Cu Kα (λ = 0.15406 nm) radiation.The surface morphology and chemical compositions were characterized by ZEISS-SUPRA-55 scanning electron microscope (SEM) and OXFORD INCA PentaFET×3 energy dispersive spectrometer (EDS).An X-ray photoelectron spectroscopy (XPS) apparatus (PHI-5400) was employed to determine the chemical valence of the elements.The conductivity type was identified by HMS-7077 measurement system.Room temperature resistivity of the films was investigated by the four-probe method in Agilent 4155c measurement system.UV-3150 spectrophotometer was used to measure the optical transmittance of the films.
Results and Discussion
The deposition rate R D is one of the most important parameters of the deposition process, which plays an important role in the structure and the properties of the films.It can be obtained through dividing the thickness by deposition time.
Figure 1 illustrates the effect of r Cu/Al on the deposition rate R D of the Cu-Al-O films on Si (100) substrate.R D increases from 1.13 nm•min −1 to 1.46 nm•min −1 with the increase of r Cu/Al from 20% to 55%.The results can be fitted by an exponential function as The deposition rate R D increases with the increase of r Cu/Al is mainly due to that the sputtering yield of Cu is higher than that of Al.In addition, the sputtered Cu atom possesses more energy than Al, thus favoring the formation of defect and nucleation center on the substrate.This also contributes to the increase of R D .
Figure 2 plots the X-ray diffraction spectra of Cu-Al-O films deposited with different r Cu/Al .When r Cu/Al is 20%, the diffraction peaks corresponding to CuAlO 2 (104), ( 015), (009), (116) and Al 2 O 3 (113), (306) are observed, indicating an excess of Al element exists in the film.When r Cu/Al increases to 45%, the CuAlO 2 (018) peak grows remarkably while Al 2 O 3 peaks tend to reduce.CuAlO 2 becomes the main phase of the film.The change may be due to the following reaction [16]: When the r Cu/Al reaches 55%, a new peak at 36.4 • which is identified to Cu 2 O (111) emerges, suggesting the surplus of Cu element in the film.
The r Cu/Al also plays an important role in the preferred growth orientation of CuAlO 2 diffraction peaks.As seen from Figure 2, with r Cu/Al of 20%, CuAlO 2 phase shows strong peak along (104) and (015) crystal planes, while the X-ray diffraction peak of (018) is weak.With r Cu/Al increases to 45%, the peak of CuAlO 2 (018) increases significantly and becomes the strongest, suggesting that the preferred growth orientation of CuAlO 2 is (018) with this r Cu/Al .When the r Cu/Al is 55%, (018) peak of CuAlO 2 weakens and the preferential growth changes into (104).Although the surface energy of (001) crystal plane might be the lowest in delfossite structure CuAlO 2 crystal, the kinetic parameters, for instance, annealing treatment, may also play a role in the selection of the preferred growth orientation.
The grain size can be estimated from the full-width halfmaximum intensity of XRD peak by using Scherrer's relation [17]: where k is a constant of 0.89 for Cu target, λ = 0.15406 nm, θ and β are the Bragg diffraction angle and half intensity width.The calculated grain sizes of the films are estimated to be 12.6 nm, 14.1 nm, 17.4 nm, and 15.2 nm for r Cu/Al of 20%, 30%, 45%, and 55%, respectively.Figure 3 displays the typical SEM images and the corresponding EDS spectra of the films deposited with different r Cu/Al on Si (100) substrate.With r Cu/Al being 20%, a large amount of globular precipitation phases have been observed, as shown in Figure 3(a).Figure 3 EDS spectrum of the globular phases, showing the atomic ratio of Al:O is around 2 : 3.This suggests that the globular phase is Al 2 O 3 .Figure 3(c) demonstrates the image of the film deposited with r Cu/Al being 45%.The film shows a uniform microstructure with well-defined grain boundaries, no impurity is observed.EDS spectrum of the film signifies the atomic ratio of Cu : Al : O is about 1 : 1 : 2, confirming the XRD analysis that CuAlO 2 is the main phase of the film.When the r Cu/Al increases to 55%, a nonfaceted phase is observed.EDS analysis of this phase shows that the atomic ratio of Cu : Al : O is about 12 : 1 : 5, indicating the precipitation phase is mainly composed of copper oxide, which is consistent with the XRD result.
To further identify the chemical compositions and valences of the elements, we performed XPS analysis to the films deposited on Si (100) substrate.Figures 4(a)-4(c) show the typical XPS spectra of the Cu-Al-O film obtained with r Cu/Al = 45% after the calibration using C 1s position of carbon.As shown in Figure 4(a), the "shake-up" peak of the Cu 2+ 2p 3/2 at around 943 eV is not observed, indicating that no Cu 2+ presents in the film.Figure 4(b) shows the Cu 2p 3/2 peak together with the two separated peaks by using the multipeaks fitting.The peak at the low binding energy of 931.7 eV is corresponding to Cu + in CuAlO 2 , while the high binding energy 932.8 eV is corresponding to Cu 2 O.The intensity of the low-energy peak (931.7 eV) is remarkably higher than that of the high-energy peak (932.8 eV), suggesting Cu + mainly exists in CuAlO 2 phase.The Al 2p peak region, shown in Figure 4(c), consists of Al 2p peak of Al 3+ (around 74.2 eV), Cu 3p 3/2 (around 75.3 eV), and Cu 3p 1/2 (77.1 eV) peaks of Cu + , which is similar to the result reported by Cai et al. [16].
The Cu 2p spectra of the other films are similar to that shown in Figure 4(a) where no Cu 2+ peaks have been observed.This is consistent with the XRD results: no CuO or CuAl 2 O 4 diffraction peak is observed in the XRD patterns.The conductivity type of the films deposited on quartz substrate was measured by Hall effect measurement and the electrical resistivity (ρ) at room temperature was studied by four-probe method.Prior to the investigation, four Au electrodes were deposited on the film surface.
Figure 5 shows the electrical resistivity (ρ) of the films formed with different r Cu/Al and the inset demonstrates the relation between current and voltage for the film deposited with r Cu/Al of 45%.From the inset I-V curve, it can be seen the linear dependence is obtained, which indicates ohmic contact has been achieved between Au electrode and the film.With r Cu/Al being 20%, the sample shows a high electrical resistivity due to the existence of large amount of insulating Al 2 O 3 in the film [18].When r Cu/Al increases from 20% to 45%, the electrical resistivity (ρ) decreases from 243 Ω•cm to 80 Ω•cm.The reason may be that the improvement of crystallization quality reduces the scattering and trapping of charge carriers, leading to the enhancement of Hall mobility.Furthermore, the increment of CuAlO 2 increases the carrier concentration of the film.With r Cu/Al being 55%, the electrical resistivity (ρ) increases to 156 Ω•cm.In this case, surplus copper element exists in the film and the copper vacancy which can produce hole carrier concentration decreases.In addition, the emergence of Cu 2 O impurity strengthens the scattering and trapping of charge carriers, decreasing the Hall mobility.
Figure 6 presents the optical transmittance spectra of the Cu-Al-O thin films deposited with different r Cu/Al on quartz substrate.As can be seen, the film deposited with r Cu/Al of 20% exhibits the highest transmittance (77%-84%) in the visible region (400-760 nm).It may be due to the large amount of Al 2 O 3 precipitation phase, which has quite high transmittance in the visible range, existing in the film.With r Cu/Al being 30%, a decrease (58%-76%) in the film transmittance was observed.When r Cu/Al increases to 45%, International Journal of Antennas and Propagation the transmittance of the film increases to 72%-79% in the visible region (400-760 nm) due to that the CuAlO 2 becomes the predominant phase.In addition, the decrease of defect density and crystallization improvement of the films also contribute to the improvement of optical transmittance.When r Cu/Al reaches 55%, the transmittance decreases again, mainly because the coexistence of Cu 2 O phase strengthen the scattering effect, lowering the optical transmittance [18].
To further investigate the optical properties, we evaluated the optical band gap (E g ) of the Cu-Al-O thin films.The optical absorption coefficient (α) of the films can be calculated using the following equation: where d is the film thickness and T is the transmittance of the film.The relation between optical absorption coefficient (α) and optical band gap (E g ) can be written as where A is the absorption edge width parameter and hν means the incident photon energy.The exponential n is 1/2 or 2 for direct allowed transition (E gd ) or indirect allowed transition (E gi ).
Figure 7 shows a typical linear fitting process of E g for the Cu-Al-O thin film deposited at r Cu/Al = 45%.E gd and E gi are obtained from the intercept on hν axis in the plots of (αhν) 2 -hν and (αhν) 1/2 -hν, respectively.Figure 8 compares the E gd and E gi values of the films deposited with different r Cu/Al .The E gd decreases from 5.3 eV to 3.6 eV with increase of r Cu/Al from 20% to 45%, afterwards, it increases to 4.7 eV with r Cu/Al reaching 55%.E gi varies in the range of 1.6-1.9eV and achieves the minimum with r Cu/Al of 45%.E gd and E gi may be influenced by the phase constitution of the films.With r Cu/Al of 20%, the film is composed of Al 2 O 3 and CuAlO 2 phases, hence, the optical band gap of the film can be evaluated by the superposition of pure Al 2 O 3 and CuAlO 2 , whose E gd are 9.0 eV [19] and 3.5 eV [6,20], respectively.The direct band gap of the film which consists of Al 2 O 3 and CuAlO 2 is assumed to be in the range of 3.5-9.0eV.This is in agreement with our result 5.3 eV.For the film deposited with r Cu/Al of 45%, the main crystal phase of the film is CuAlO 2 and the estimated E gd (3.6 eV) is close to the E gd of pure CuAlO 2 (3.5 eV) [6,20].Moreover, quantum size effect may also affect the band gap, which can be described by the following equation [20]: R is the radius of the semiconductor particle and the first term is the quantum energy of localization for both electron and hole.The second term is the Coulomb attraction and the third term represents the band gap of the bulk semiconductor.As is shown in the model, the change tendency of E g and R is reverse, that is, E g should be smaller for larger R. The estimated results show that with the largest grain size 17.4 nm (r Cu/Al = 45%), the E gd achieves the minimum value 3.6 eV, while with the minimum grain size 12.6 nm (r Cu/Al = 20%), the E gd obtains the maximum value 5.3 eV, indicating our results is consistent with the model.This suggests that quantum size effect resulted from the nano size grain structure may play a role in the optical band gap of the film.
Conclusions
Cu (b) illustrates the International Journal of Antennas and Propagation 3
Figure 1 :
Figure 1: Deposition rate R D of the films as a function of sputtering area ratio of Cu/Al for sputtering target (r Cu/Al ).
Figure 5 :Figure 6 :
Figure 5: Variation of electrical resistivity with different r Cu/Al .Inset shows the I-V relation for the sample deposited with r Cu/Al of 45%.
Figure 2: XRD patterns of Cu-Al-O thin films deposited with different r Cu/Al on Si (100) substrate. ) Figure 7: Plots of (αhν) 2 versus hν for the determination of direct band gap (E gd ) for the film deposited with r Cu/Al of 45% (inset: determination of indirect band gap E gi ).
r Cu/Al (%) Figure 8: Effect of r Cu/Al on the optical band gap of the film: (a) direct band gap E gd ; (b) indirect band gap E gi .
-Al-O thin films have been deposited on Si (100) and quartz substrates by RF magnetron sputtering technique.The sputtering area ratio of Cu/Al for the sputtering target (r Cu/Al ) plays an important role in the structure, opticalelectrical properties and optical band gaps of the films.The deposition rate R D increases with the increase of r Cu/Al mainly because of the higher sputtering yield of Cu than Al.With r Cu/Al of 20%, CuAlO 2 and Al 2 O 3 phases coexist in the film due to the surplus Al element.CuAlO 2 becomes the main phase of the film when r Cu/Al reaches 45%.Whereas when r Cu/Al increases to 55%, as well as the CuAlO 2 , Cu 2 O diffraction peak also be detected.Cu + in the films deposited with different r Cu/Al exists in the form of CuAlO 2 or Cu 2 O, no Cu 2+ has been observed.The films show stable p-type conductivity.With the increase of r Cu/Al , the electrical resistivity first decreases afterwards increases.With r Cu/Al of 45%, the film shows the optimum opticalelectrical properties.The electrical resistivity is measured to be 80Ω•cm with the transmittance being 72%-79% in the visible region(400-760 nm).The estimated E gd is in the range of 3.6-5.3eV and E gi in the range of 1.6-1.9eV which depends on r Cu/Al . | 4,253.2 | 2012-08-13T00:00:00.000 | [
"Physics",
"Materials Science"
] |
A New Methodology for Predicting Brittle Fracture of Plastically Deformable Materials: Application to a Cold Shell Nosing Process
The traditional theory of ductile fracture has limitations for predicting crack generation during a cold shell nosing process. Various damage criteria are employed to explain fracture and failure in the nose part of a cold shell. In this study, differences in microstructure among fractured materials and analysis of their surfaces indicated the occurrence of brittle fractures. The degree of “plastic deformation-induced embrittlement” (PDIE) of plastically deformable materials affects the likelihood of brittle fractures; PDIE can also decrease the strength in tension due to the Bauschinger effect. Two indicators of brittle fracture are presented, i.e., the critical value of PDIE and the allowable tensile strength (which in turn depends on the degree of PDIE or embrittlement-effective strain). When the maximum principal stress is greater than the latter and the PDIE is greater than the former, our method determines the likelihood of brittle fracture. This approach was applied to an actual cold shell nosing process, and the predictions were in good quantitative agreement with the experimental results.
Damage models have been successfully applied to predict tensile strength even after the fracture point [15,17], as well as chevron cracking [15,16,[20][21][22]. However, the predicted rate of decrease in tensile load was lower than in experimental tensile tests. Other studies reported unrealistic rates of decrease in tensile load [15,20], implying that purely damagebased approaches are inappropriate. The concordance between critical damage predictions and experimental results are path-and test dependent [1].
Despite the use of various methods, chevron cracking prediction remains a problem. Most studies failed to predict V-shaped cracks, and all studies to date failed to predict the decreasing slope with the increase in radius, especially near the surface of the extrusion. In addition, the predicted radii of chevron cracks were much smaller than experimentally determined values [22]. These observations suggest that embrittlement due to compressive plastic deformation around the extrusion surface must exert an effect. One major complicating factor is the Bauschinger effect, which may alter plastic deformation behavior to some extent. For example, it can cause almost perfectly plastic materials to behave like strain-hardening materials and vice versa [23].
It is often difficult to determine the reasons for fracture occurrence during cold forging, because the inherent behaviors of materials change with plastic deformation; only the mass remains unchanged. Brittle fracture of ductile materials during cold metal forming is particularly problematic. The degree of "plastic deformation-induced embrittlement" (PDIE) can be increased by embedded inclusions, such as nitrogen and hydrogen as well as detrimental metallic or non-metallic inclusions, in plastically deformable bodies. Kim et al. [24] investigated the effects of nitrogen on the likelihood of fracture in cold forging and reported that it was necessary to minimize nitrogen to prevent the material from cracking. Thomson [25] proposed a physical model of fracture featuring a brittle crack embedded in a plastically deformed medium and extended it to the case of hydrogen embrittlement in steel. Singh et al. [26] studied the effects of non-metallic inclusions on crack formation in forged steel components using various metallurgical techniques and showed that such inclusions can be a major source of brittle fracture in cold forging.
In cold metal forming, brittle fracture can occur during plastic deformation. Brittle fracture of initially ductile materials with moderate forgeability occurs frequently, e.g., in double cup, forward and backward extrusion. Sljapic et al. [27] studied ductile and brittle fractures occurring during cold forming of brass, and reported ductile fracture of an axisymmetric collar, while a brittle fracture was observed in hexagonal-shaped bars in response to large plastic strain. It was concluded that a single fracture criterion cannot explain these fracture cases.
The brittle facture of various materials has been studied theoretically and metallurgically. Jokl et al. [28] studied brittle fracture of a crystalline solid capable of being plastically deformed, taking into consideration the energy consumed by bond stretching and breaking, and by dislocation emission from the crack tip. There have also been studies [29,30] on the phase field theory of brittle fracture, and this remains a topic of significant research interest. The above studies focused on microscopic examination of crack propagation, and did not address increases in the PDIE of ductile materials or the sudden occurrence of brittle fracture. Watanabe et al. [31] proposed a modified Freudenthal damage model, i.e., an energy model, to understand brittle fracture occurring during cold forging of hollow shafts of the cold forgeable material, S48C, when the material had not been subjected to proper heat treatment.
Here, a practical, macroscopic approach for predicting brittle fracture during cold forging of ductile materials is presented. We assumed that the degree of PDIE is isotropic, and also that fracture occurs in a plane normal to the direction of the maximum principal stress when its value exceeds the weighted tensile strength of the material embrittled by compressive plastic deformation. Figure 1 shows the cold shell nosing process, which is a special type of cold forging process. The material is AISI 9260 (C: 0.60 wt.%, Si: 1.86 wt.%, Mn: 0.81 wt.%, P: 0.013 wt.%, S: 0.014 wt.%, Al: 0.011 wt.%, Cr: 0.12 wt.%, Mo: 0.03 wt.%, Ni: 0.08 wt.%, Fe: Bal.). The preform shown in Figure 1a is hot forged, spheroidized, machined, and lubricated for the single-stage cold shell nosing process shown in Figure 1b. A hydraulic press was employed for pilot manufacturing. Figure 2a shows the lubricated preform, and Figure 2b,c compares good-quality forging and material fracture cases. The fracture rate among all test products was~3%. The fracture patterns were almost the same as those shown in Figure 2c. The microstructure of the fractured material was characterized by a typical spheroidized pearlite, as shown in Figure 3. Visual investigation of the fracture in Figure 2d indicates no characteristics of ductile fracture, i.e., history of damage evolution. The microstructure of the fractured material was characterized by a typical spheroidized pearlite, as shown in Figure 3. Visual investigation of the fracture in Figure 2d indicates no characteristics of ductile fracture, i.e., history of damage evolution. To determine the causes of fracture, the materials were examined macroscopically and microscopically. The five tensile specimens shown in Figure 4 were fabricated from the preforms, ready to be shell-nosed and pulled by a universal testing machine. The diameter and gauge length were 6 and 30 mm, respectively. We analyzed the representative elongation-tensile load curve denoted by No. 2 in Figure 5 to determine flow stress using a material identification technique [32]. Flow stress for strain values up to 0.8 was acquired and extrapolated to larger strains, as can be seen in Figure 6.The strain up to which the flow stress was theoretically obtained was quite large. The technique can predict the flow stress at the strain of around 1.5, depending on the materials [33][34][35]. Note that the point marked "necking point" in the true stress-strain curve in Figure 6 corresponds to the actual necking point in Figure 7. To determine the causes of fracture, the materials were examined macroscopically and microscopically. The five tensile specimens shown in Figure 4 were fabricated from the preforms, ready to be shell-nosed and pulled by a universal testing machine. The diameter and gauge length were 6 and 30 mm, respectively. The microstructure of the fractured material was characterized by a typical spheroidized pearlite, as shown in Figure 3. Visual investigation of the fracture in Figure 2d indicates no characteristics of ductile fracture, i.e., history of damage evolution. To determine the causes of fracture, the materials were examined macroscopically and microscopically. The five tensile specimens shown in Figure 4 were fabricated from the preforms, ready to be shell-nosed and pulled by a universal testing machine. The diameter and gauge length were 6 and 30 mm, respectively. We analyzed the representative elongation-tensile load curve denoted by No. 2 in Figure 5 to determine flow stress using a material identification technique [32]. Flow stress for strain values up to 0.8 was acquired and extrapolated to larger strains, as can be seen in Figure 6.The strain up to which the flow stress was theoretically obtained was quite large. The technique can predict the flow stress at the strain of around 1.5, depending on the materials [33][34][35]. Note that the point marked "necking point" in the true stress-strain curve in Figure 6 corresponds to the actual necking point in Figure 7. We analyzed the representative elongation-tensile load curve denoted by No. 2 in Figure 5 to determine flow stress using a material identification technique [32]. Flow stress for strain values up to 0.8 was acquired and extrapolated to larger strains, as can be seen in Figure 6. The strain up to which the flow stress was theoretically obtained was quite large. The technique can predict the flow stress at the strain of around 1.5, depending on the materials [33][34][35]. Note that the point marked "necking point" in the true stress-strain curve in Figure 6 corresponds to the actual necking point in Figure 7. The microstructure of the fractured material was characterized by a typical spheroidized pearlite, as shown in Figure 3. Visual investigation of the fracture in Figure 2d indicates no characteristics of ductile fracture, i.e., history of damage evolution. To determine the causes of fracture, the materials were examined macroscopically and microscopically. The five tensile specimens shown in Figure 4 were fabricated from the preforms, ready to be shell-nosed and pulled by a universal testing machine. The diameter and gauge length were 6 and 30 mm, respectively. We analyzed the representative elongation-tensile load curve denoted by No. 2 in Figure 5 to determine flow stress using a material identification technique [32]. Flow stress for strain values up to 0.8 was acquired and extrapolated to larger strains, as can be seen in Figure 6.The strain up to which the flow stress was theoretically obtained was quite large. The technique can predict the flow stress at the strain of around 1.5, depending on the materials [33][34][35]. Note that the point marked "necking point" in the true stress-strain curve in Figure 6 corresponds to the actual necking point in Figure 7. Figure 6, with an emphasis on post-necking strain hardening. The comparison implies that the flow stress in Figure 6 is acceptable, especially the flow stress at large strain after the necking point; the experimental and predicted shapes of the specimen after fracture are in good agreement with each other. Notably, the true strain at the necking point, i.e., 0.13, is quite small compared with the maximum strain of 0.8 in the tensile test in the present study. Before the necking point, some discrepancies between the experimental and predicted tensile load-elongation curves can be seen in Figure 7b. However, they are very close to each other after the necking point, which can be accurately predicted regardless of the error [36]; this suggests that the almost-uniform cross-section of the specimen was maintained up to the necking point and that the effect of the error Figure 6, with an emphasis on post-necking strain hardening. The comparison implies that the flow stress in Figure 6 is acceptable, especially the flow stress at large strain after the necking point; the experimental and predicted shapes of the specimen after fracture are in good agreement with each other. Notably, the true strain at the necking point, i.e., 0.13, is quite small compared with the maximum strain of 0.8 in the tensile test in the present study. Before the necking point, some discrepancies between the experimental and predicted tensile load-elongation curves can be seen in Figure 7b. However, they are very close to each other after the necking point, which can be accurately predicted regardless of the error [36]; this suggests that the almost-uniform cross-section of the specimen was maintained up to the necking point and that the effect of the error Figure 6, with an emphasis on post-necking strain hardening. The comparison implies that the flow stress in Figure 6 is acceptable, especially the flow stress at large strain after the necking point; the experimental and predicted shapes of the specimen after fracture are in good agreement with each other. Notably, the true strain at the necking point, i.e., 0.13, is quite small compared with the maximum strain of 0.8 in the tensile test in the present study. Before the necking point, some discrepancies between the experimental and predicted tensile load-elongation curves can be seen in Figure 7b. However, they are very close to each other after the necking point, which can be accurately predicted regardless of the error [36]; this suggests that the almost-uniform cross-section of the specimen was maintained up to the necking point and that the effect of the error could therefore be neglected when predicting the critical damage arising from the tensile test because damage accumulation is markedly affected by the strain after the necking point.
Problem Description
We simulated the same tensile test to determine the critical damage values of various damage models, based on the maximum damage calculated at the fracture point, including those of Freudenthal [3], [12], Bai and Wierzbicki (BW) [13] for both unnormalized lode angles (BWUL) and normalized lode angles (BWNL), and Lou and Huh (LH) [14]. The material constants for all models are given in Table 1, along with their associated model equations and references. σ m , ε and σ i (σ 1 ≥ σ 2 ≥ σ 3 ) mean mean stress, effective strain and principal stress, respectively. Note that some of the material constants in Table 1 are not specifically for AISI 9260, because their acquisition is costly and time consuming and the goal of this study was not direct comparison of the damage models, but rather to determine whether the damage models can be used to predict fracture occurring during the cold shell nosing process, as shown in Figure 2d. Table 1. Damage models employed in this study and their material constants.
Damage Model Model Equations Material Constants Critical Damage Reference
Freudenthal [14] The predicted critical damage values, which mean maximum damage values at the necking point at the fracture instant (See Figure 7a), are summarized in Table 1 and the normalized damage values around the necking region at the fracture instant, defined as the calculated damage values divided by their corresponding critical damage values, are shown in Figure 8. Note that the critical damage values were determined by the maximum damage values at the fracture instant and that the normalized damage values at the necking point in Figure 8 are thus all unity. Figure 8 shows that all of the damage models tested predicted the same initial fracture point, i.e., the center of the necking point, even though the distribution of normalized damage can be categorized into three different groups (first group, Freudenthal, McClintock, CL, RT, BDR, NRMQ, and LH; second group, NCL, OSOS, RTCL, and BWUL; and third group, CMV and KH).
normalized damage values around the necking region at the fracture instant, defined as the calculated damage values divided by their corresponding critical damage values, are shown in Figure 8. Note that the critical damage values were determined by the maximum damage values at the fracture instant and that the normalized damage values at the necking point in Figure 8 are thus all unity. Figure 8 shows that all of the damage models tested predicted the same initial fracture point, i.e., the center of the necking point, even though the distribution of normalized damage can be categorized into three different groups (first group, Freudenthal, McClintock, CL, RT, BDR, NRMQ, and LH; second group, NCL, OSOS, RTCL, and BWUL; and third group, CMV and KH).
Checking for Ductile Fracture
We simulated the cold shell nosing process using an elastoplastic finite element method [37,38] with all the information outlined in the previous section. We assumed that the material is rate-independent and that the punch velocity was fixed at −1 mm/s. Due to the well-lubricated material surface, the coefficient of friction was assumed to be 0.05. The finite element model was purposely meshed for precise simulation of the nose part, as shown in Figure 9a, because most plastic deformation is concentrated in this region. The deformation over time with effective strain is shown in Figure 9b, indicating that the thick nose part slid down along the die wall with the reduction of its radius under compressive stress; it passed the die orifice between 2.1 and 2.7 s. The maximum effective strain of 0.52 occurred on the internal bulged surface. However, the effective strain was relatively low on the opposite side, i.e., 0.31.
The predicted forming load-time curve is shown in Figure 10. The forming load increased steadily up to the stroke when the nose part started to separate from the die while oscillating due to intermittent node detachment from the die. While the thick nose part
Checking for Ductile Fracture
We simulated the cold shell nosing process using an elastoplastic finite element method [37,38] with all the information outlined in the previous section. We assumed that the material is rate-independent and that the punch velocity was fixed at −1 mm/s. Due to the well-lubricated material surface, the coefficient of friction was assumed to be 0.05. The finite element model was purposely meshed for precise simulation of the nose part, as shown in Figure 9a, because most plastic deformation is concentrated in this region. The deformation over time with effective strain is shown in Figure 9b, indicating that the thick nose part slid down along the die wall with the reduction of its radius under compressive stress; it passed the die orifice between 2.1 and 2.7 s. The maximum effective strain of 0.52 occurred on the internal bulged surface. However, the effective strain was relatively low on the opposite side, i.e., 0.31.
passed the die orifice, the forming load increased to its local maximum, as indicated in Figure 10. From the stroke where the upper side of the material started to show plastic deformation, it increased steadily up to 214 tons. It is interesting to note that the decrease in forming load with stroke increase occurred near the instant of fracture, as shown in Figure 10, which can have direct or indirect effects on the fracture. We determined the damage values for all models and then normalized them by dividing by the critical damage. Figure 11 compares the predicted normalized damages in the nose part; the maximum values are listed in Table 2. All of the predicted normalized damage values were much lower than unity. In addition, all of the damage models predicted low damage on the surface of the fractured side, i.e., on the outer diameter. Therefore, we concluded that the actual fracture surface is unrelated to the ductile fracture. The predicted forming load-time curve is shown in Figure 10. The forming load increased steadily up to the stroke when the nose part started to separate from the die while oscillating due to intermittent node detachment from the die. While the thick nose part passed the die orifice, the forming load increased to its local maximum, as indicated in Figure 10. From the stroke where the upper side of the material started to show plastic deformation, it increased steadily up to 214 tons. It is interesting to note that the decrease in forming load with stroke increase occurred near the instant of fracture, as shown in Figure 10, which can have direct or indirect effects on the fracture. passed the die orifice, the forming load increased to its local maximum, as indicated in Figure 10. From the stroke where the upper side of the material started to show plastic deformation, it increased steadily up to 214 tons. It is interesting to note that the decrease in forming load with stroke increase occurred near the instant of fracture, as shown in Figure 10, which can have direct or indirect effects on the fracture. We determined the damage values for all models and then normalized them by dividing by the critical damage. Figure 11 compares the predicted normalized damages in the nose part; the maximum values are listed in Table 2. All of the predicted normalized damage values were much lower than unity. In addition, all of the damage models predicted low damage on the surface of the fractured side, i.e., on the outer diameter. Therefore, we concluded that the actual fracture surface is unrelated to the ductile fracture. We determined the damage values for all models and then normalized them by dividing by the critical damage. Figure 11 compares the predicted normalized damages in the nose part; the maximum values are listed in Table 2. All of the predicted normalized damage values were much lower than unity. In addition, all of the damage models predicted low damage on the surface of the fractured side, i.e., on the outer diameter. Therefore, we concluded that the actual fracture surface is unrelated to the ductile fracture. Figure 11. Predictions of normalized damage.
New Approach to Brittle Fracture
Plastic deformation of material under compression results in the loss of a great deal of tensile strength capacity because of the Bauschinger effect; the material becomes brittle
New Approach to Brittle Fracture
Plastic deformation of material under compression results in the loss of a great deal of tensile strength capacity because of the Bauschinger effect; the material becomes brittle to some extent, depending on the material properties and magnitude of the plastic deformation [39,40]. When a plastically compressed material that has lost a considerable amount of its yield strength is elongated, the material may yield or fracture, even if the tension stress is smaller than the original yield strength or fracture stress. In the case of brittle fracture, the fracture surface is almost normal to the axis of maximum principal stress.
In the case of multiaxial stress and strain, we have to measure the "embrittlementeffective strain" to determine the degree of PDIE of the materials; this is done using a form of scalar function. For the three plane stress cases shown in Figure 12, we present the following compressive strain weighting index (CSWI), ξ, to exclude the ductile fracturerelated strain from the multiaxial stress or strain: where <x> is a mathematical operator taking a larger value between zero and x, and γ is defined by where σ 1 and σ are the maximum principal stress and effective stress, respectively. ξ 0 in Equation (1) can be dealt with as a material constant. When we assume that 1 − ξ 0 < γ > is non-negative in the state of non-positive mean stress in the plane stress case, ξ 0 ≤ √ 3 should be satisfied. Based on the zero mean stress case in Figure 12c, we assumed that ξ 0 = √ 3 for the zero-value condition of CSWI at zero mean stress.
to some extent, depending on the material properties and magnitude of the plastic deformation [39,40]. When a plastically compressed material that has lost a considerable amount of its yield strength is elongated, the material may yield or fracture, even if the tension stress is smaller than the original yield strength or fracture stress. In the case of brittle fracture, the fracture surface is almost normal to the axis of maximum principal stress.
In the case of multiaxial stress and strain, we have to measure the "embrittlementeffective strain" to determine the degree of PDIE of the materials; this is done using a form of scalar function. For the three plane stress cases shown in Figure 12, we present the following compressive strain weighting index (CSWI), ξ , to exclude the ductile fracturerelated strain from the multiaxial stress or strain: where <x> is a mathematical operator taking a larger value between zero and x, and γ is defined by where 1 σ and σ are the maximum principal stress and effective stress, respectively. Next, we checked the three special cases, 1, 1 and 1/ 3 , with respect to the cases in Figure 12a-c, respectively. Substituting these γ -values into Equation (1), we obtained weighting indices of 0, 1, and 0 for the three special cases, respectively. Note that the effects on fracture of the stresses shown in Figure 12a,c are fully accounted for by the damage model of ductile fracture, implying that the CSWI is able to reflect brittle fracture. With respect to the time parameter, the strain rate multiplied by the CSWI describes a type of compressive strain, i.e., embrittlement-effective strain, which is calculated as where B ε is defined as the degree of PDIE, which is used to evaluate the likelihood of brittle fracture. As the ductility continues to govern the material when the degree of PDIE is small, we have to assume the critical value of PDIE . B Cr ε , which is used to determine the conditions necessary for brittle fracture of the plastically compressed material. Next, we checked the three special cases, γ = 1, −1 and 1/ √ 3, with respect to the cases in Figure 12a-c, respectively. Substituting these γ-values into Equation (1), we obtained weighting indices of 0, 1, and 0 for the three special cases, respectively. Note that the effects on fracture of the stresses shown in Figure 12a,c are fully accounted for by the damage model of ductile fracture, implying that the CSWI is able to reflect brittle fracture.
With respect to the time parameter, the strain rate multiplied by the CSWI describes a type of compressive strain, i.e., embrittlement-effective strain, which is calculated as where ε B is defined as the degree of PDIE, which is used to evaluate the likelihood of brittle fracture. As the ductility continues to govern the material when the degree of PDIE is small, we have to assume the critical value of PDIE ε B.Cr , which is used to determine the conditions necessary for brittle fracture of the plastically compressed material. In addition, we adopted another weighting function, denoted as ζ(ε B ), called embrittlement function, to reduce the fracture stress based on the degree of PDIE and considering the Bauschinger effect, as follows where σ BF is the maximum allowable principal stress for the plastically compressed material and σ max is the maximum effective stress experienced. In this study, this weighting function was given by the following linear function: where B is a material constant, proposed in this study, to reflect the effect of the Bauschinger effect on reducing the allowable tensile stress of the embrittled material. Notably, the Bvalue can be obtained by tensile test of compressed material or bending test. Therefore, we evaluated the occurrence of brittle fracture when the maximum principal stress σ 1 exceeded σ max multiplied by the embrittlement function. As σ 1 keeps to change with the stroke in the state of failure if a function of crack generation is not adopted, the number of solution steps for such failures should be counted to estimate the potential for brittle fracture. Figure 13a shows the predicted compressive strain, i.e., the degree of PDIE when material and max σ is the maximum effective stress experienced. In this study, this weighting function was given by the following linear function: where B is a material constant, proposed in this study, to reflect the effect of the Bauschinger effect on reducing the allowable tensile stress of the embrittled material. Notably, the B -value can be obtained by tensile test of compressed material or bending test.
Therefore, we evaluated the occurrence of brittle fracture when the maximum principal stress 1 σ exceeded max σ multiplied by the embrittlement function. As 1 σ keeps to change with the stroke in the state of failure if a function of crack generation is not adopted, the number of solution steps for such failures should be counted to estimate the potential for brittle fracture. Figure 13a shows the predicted compressive strain, i.e., the degree of PDIE when Note that the critical value of PDIE, denoted by ε B.Cr , calculated qualitatively from the comparison between the experiment in Figure 2c and the predicted degree of PDIE in Figure 13a. At this moment, the quantitative interpretation or determination of the parameters could not be made because we did not couple the crack generation with this approach. Nonetheless, the predicted brittle fracture is greatly meaningful to avoid such a possible fracture in the process design stage, as can be seen from the comparison of Figures 2 and 13. Now, we consider the reason for 3% rate of crack occurrence. Figure 14 compares the microstructures of fractured and non-fractured materials. The fractured material is a typical mixture of pearlite and spheroidite with larger grains, which decreases ductility [41][42][43][44]. This is the same as the case reported by Watanabe et al. [31], indicating that heat treatment of the low-ductility material should be applied carefully, especially when the material is exposed to major changes in stress state after considerable plastic deformation. This type of brittle fracture may occur frequently at corners, where there is no die contact after severe plastic deformation, and under forward and backward extrusion. In all such cases, marked changes are seen in the stress state after severe plastic deformation due to compressive stress. The approach introduced herein is appropriate to predict the possibility of such brittle fractures during cold forging.
microstructures of fractured and non-fractured materials. The fractured material is a typical mixture of pearlite and spheroidite with larger grains, which decreases ductility [41][42][43][44]. This is the same as the case reported by Watanabe et al. [31], indicating that heat treatment of the low-ductility material should be applied carefully, especially when the material is exposed to major changes in stress state after considerable plastic deformation. This type of brittle fracture may occur frequently at corners, where there is no die contact after severe plastic deformation, and under forward and backward extrusion. In all such cases, marked changes are seen in the stress state after severe plastic deformation due to compressive stress. The approach introduced herein is appropriate to predict the possibility of such brittle fractures during cold forging.
Conclusions
The reasons for fracture in the nose part during the single-stage cold shell nosing process were examined based on the theory of ductile fracture, using various damage models, i.e., the Freudenthal, McClintock, CL and NCL, RT, BDR, NRMQ, OSOS, CMV, RTCL, KH, BWUL and BWNL, and LH models, after tensile testing for detailed investigation of the flow stress and ductile fracture behaviors of the materials. However, the models could not predict fracture around the nose part.
In this study, a practical methodology for predicting brittle fracture was presented based on the degree of PDIE of plastically deformable materials. A new concept, i.e., the CSWI, was proposed to calculate the embrittlement-effective strain, excluding the damage-effective strain from the multiaxial stress. The cumulative strain indexed by CSWI was taken to indicate the degree of PDIE, which affects the likelihood of brittle fracture of plastically deformable material, as well as the fracture stress due to the Bauschinger effect. The critical value of PDIE is a material and process parameter. Our methodology estimates the likelihood of brittle fracture only when the maximum principal stress is greater than the fracture stress, and the PDIE is greater than its critical value.
Assuming that the tensile strength decreases linearly with increasing PDIE, our methodology was applied to determine the reasons for fracture in an actual cold shell nosing process. Comparison of the predicted and experimental fracture shape and position indicated good quantitative agreement. Comparison of the microstructures of the materials between the failure and success cases also supported the conclusions of this study.
Conclusions
The reasons for fracture in the nose part during the single-stage cold shell nosing process were examined based on the theory of ductile fracture, using various damage models, i.e., the Freudenthal, McClintock, CL and NCL, RT, BDR, NRMQ, OSOS, CMV, RTCL, KH, BWUL and BWNL, and LH models, after tensile testing for detailed investigation of the flow stress and ductile fracture behaviors of the materials. However, the models could not predict fracture around the nose part.
In this study, a practical methodology for predicting brittle fracture was presented based on the degree of PDIE of plastically deformable materials. A new concept, i.e., the CSWI, was proposed to calculate the embrittlement-effective strain, excluding the damage-effective strain from the multiaxial stress. The cumulative strain indexed by CSWI was taken to indicate the degree of PDIE, which affects the likelihood of brittle fracture of plastically deformable material, as well as the fracture stress due to the Bauschinger effect. The critical value of PDIE is a material and process parameter. Our methodology estimates the likelihood of brittle fracture only when the maximum principal stress is greater than the fracture stress, and the PDIE is greater than its critical value.
Assuming that the tensile strength decreases linearly with increasing PDIE, our methodology was applied to determine the reasons for fracture in an actual cold shell nosing process. Comparison of the predicted and experimental fracture shape and position indicated good quantitative agreement. Comparison of the microstructures of the materials between the failure and success cases also supported the conclusions of this study.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,708.4 | 2021-03-24T00:00:00.000 | [
"Materials Science"
] |
Present status and first results of the final focus beam line at the KEK Accelerator Test Facility
ATF2 is a final-focus test beam line which aims to focus the low emittance beam from the ATF damping ring to a vertical size of about 37 nm and to demonstrate nanometer level beam stability. Several advanced beam diagnostics and feedback tools are used. In December 2008, construction and installation were completed and beam commissioning started, supported by an international team of Asian, European, and U.S. scientists. The present status and first results are described.
I. INTRODUCTION
An important technical challenge of future linear collider projects such as ILC [1] or CLIC [2] is the collision of extremely small beams of a few nanometers in vertical size. This challenge involves three distinct issues: creating small emittance beams, preserving the emittance during acceleration and transport, and finally focusing the beams to nanometers before colliding them. The Accelerator Test Facility (ATF) at KEK [3] was built to create small emittance beams, and has succeeded in obtaining emittances that almost satisfy ILC requirements. The ATF2 facility [4], which uses the beam extracted from the ATF damping ring (DR), was constructed to address the last two issues: focusing the beams to nanometer scale vertical beam sizes and providing nanometer level stability. ATF2 is a followup of the final focus test beam (FFTB) experiment at SLAC [5]. The optics of the final focus is a scaled-down version of the ILC design. It is based on a scheme of local chromaticity correction [6] which is now also used for the CLIC design, where symmetries are introduced in the optics to control all relevant aberrations up to third order.
The main parameters of ATF2 are given in Table I with the corresponding values for the ILC and CLIC projects. The value of y and hence the vertical beam size at the optical focal point [referred to as interaction point (IP) by analogy to the linear collider collision point] are chosen to yield a chromaticity of similar magnitude as in the ILC final focus. For the energy and emittance of the ATF beam and given the distance L* between the last quadrupole and the IP, this leads to a vertical beam size of about 37 nm, including residual effects from higher-order aberrations.
The layout of the ATF/ATF2 facility and the design optical functions of the ATF2 beam line are displayed in Figs. 1 and 2, respectively. The two main project goals are: goal 1-achieving the 37 nm design vertical beam size at the IP by 2010; and goal 2-stabilizing the beam at that point at the nanometer level by 2012.
Achieving the first goal requires developing and implementing a variety of methods to validate the design optics in the presence of imperfections, in particular beam measurement and tuning techniques to cancel unwanted distortions of the beam phase space. Before reaching the ATF2 final focus (see Fig. 1), the beam is extracted from the DR into a reconfigured version of the old ATF extraction line and transported in a matching and diagnostic section where beam parameters can be measured with wire scanners and where anomalous dispersion, betatron mismatch, and coupling can be corrected with a set of dedicated upright and skew quadrupole magnets.
Unlike the case of a linear collider where the measurements of luminosity and electromagnetic interactions between the colliding beams provide information on their respective sizes and overlap, ATF2 is a single beam line. Measuring transverse beam sizes at the IP requires dedicated beam instrumentation, especially a laser interferometer-based beam size monitor (BSM), also called Shintake monitor [7]. Optical adjustments to minimize the beam size at the IP are achieved mainly using combinations of sextupole magnet displacements for independent removal of the linear phase space correlations affecting the beam size.
To measure the beam orbit and maintain the beam size with feedback, the beam line magnets are equipped with submicron resolution cavity beam position monitors (BPM) and are placed on mechanical movers.
Both BSM and BPM measurements are essential to implement the tuning methods for the first goal.
ATF2 construction was completed in 2008 and first beam testing began in December of that year, focusing on the first goal. In addition, a number of studies and hardware development towards the second goal have proceeded in parallel (see Sec. IV). Since the ATF2 project relies on many in kind contributions and is commissioned and operated by scientists from several institutions in a number of countries spread out geographically over three continents, it is considered a model for the organization of the international collaborations which will be needed to build and operate future large scale accelerator projects such as the ILC. Planning and coordination are of crucial importance. The organization of the ATF collaboration and commissioning efforts are described in [3]. The commissioning strategy is designed to use the large international contribution efficiently. Training and transfer of knowledge, important to strengthen the accelerator community and prepare for future large projects, are emphasized. Beam operation time is divided giving 50% for ATF2, 30% for DR and injector related R&D, and 20% for maintenance and upgrades, in order to ensure richness of the overall program while providing sufficient time for the commissioning.
In this paper, the present status and performance of the recently deployed ATF2 systems are described, followed by the first experience with beam measurements and tuning during winter and spring 2009. In the last section, the immediate outlook of the project as well as several near future and longer term plans are outlined.
A. Magnets and magnet mover
The ATF2 beam line extends over about 90 meters from the beam extraction point in the ATF DR to the IP (see Fig. 3). It contains seven dipole, three septum, 49 quadrupole, five sextupole, and a number of corrector magnets at room temperature [8,9]. Some magnets were fabricated specially for ATF2 while others were reused from the old ATF extraction beam line and from beam lines at SLAC. Among the latter were the two quadrupole and sextupole magnets composing the final doublet (FD) system at the end of the beam line. The apertures of the FD quadrupole magnets were increased to accommodate the large function values in the FD. Careful magnetic measurements of all newly built magnets and of the modified ones in the FD were done to check and control their higher-order multipole contents. In the last focusing quadrupole magnet within the FD system, where the horizontal beam size reaches its largest value in the system, the tolerances to enable the nominal IP beam parameters to be achieved were slightly exceeded. Several possibilities to readjust the optics design have been studied to mitigate the resulting deterioration of the vertical beam size [10,11].
Dipole and quadrupole magnets in the extraction line were fixed on stainless steel supports bolted to the floor while final focus magnets were fixed on support blocks in concrete glued to the floor with adhesive polymer concrete. Vertical and horizontal positions and tilts of both sets can be adjusted manually with bolts during alignment. Anticipating gradual movements of supports and magnets due to thermal variations or slow ground motion, 20 quadrupole and five sextupole magnets in the final focus were put on remote-controlled three-axis movers recycled from the FFTB experiment. Each mover has three camshafts for adjustments of horizontal and vertical positions (with precision of 1-2 m), and for rotations about the beam axis (roll, with precision of 3-5 rad). By combining horizontal and vertical motions of these magnets, both trajectory and linear optics distortions can be corrected.
Magnet production and refurbishment was carried out from mid-2005 through mid-2008. Installation and alignment was completed in 2008. Overall alignment precisions of 0.1 mm in the three directions and 0.1 mrad in roll angles have been achieved using conventional alignment/metrology techniques. Commissioning and operation of all magnets has proceeded smoothly. The integrated strength data from the magnetic measurements is taken into account in the power supply control software to set required currents and define the standardization procedures. The final alignment of the magnets will be achieved via beam based alignment (BBA) techniques [12].
B. Final doublet stability
The FD is composed of two quadrupole and two sextupole magnets named QD0, QF1 and SD0, SF1. These magnets must be supported in a way which ensures that their jitter relative to the IP where the BSM is located is smaller than 7 nm, in order to limit effects on the measured beam size to less than 5%. Because of the low beam repetition rate of about 1 Hz, such stability is needed from about 100 Hz, above which ground motion becomes small enough, down to about 0.1 Hz, below which beam based feedback methods can be used. A rigid support was chosen since the coherence length at ATF2, of about 4 m in this frequency range [13], exceeds the distance between the FD and IP, hence strongly suppressing their relative motion. A rigid honeycomb block from Technical Manufacturing Corporation was used, supported on a set of steel plates which covered most of its base and were tied to the floor with bolts. A thin layer of natural beeswax was then used between the plates and the honeycomb block to ensure good mechanical coupling as well as ease of removal [14]. New supports were made and put under the FFTB movers so the magnets' centers reached the 1.2 m beam height. Vibration measurements with the table fixed to the floor and all magnets and movers installed were performed in the laboratory for prior validation, including checking potential effects from cooling water flowing in the magnets [15]. The whole system (see Fig. 4) was installed at KEK in September 2008, where additional measurements [16] were performed confirming that the residual motions of the magnets relative to the BSM were within tolerances (see Table II).
C. Cavity beam position monitors
The ATF2 beam line is instrumented with 32 C-band (6.5 GHz) and four S-band (2.8 GHz) high resolution cavity beam position monitor systems. In addition to these dipole cavities there are four C-band and one S-band reference cavities to monitor beam charge and beam arrival phase. In the diagnostics and final focus section every quadrupole and sextupole magnet is instrumented with such a BPM. The C-band position sensitive cavities are aligned to the quadrupole magnet centers using mounting fixtures while the S-band cavities are mounted next to the FD magnets. All cavities are cylindrical resonant cavities, with rectangular waveguide couplers to select the dipole, position sensitive, mode. The cavity output is filtered, mixed, amplified, and filtered again to produce an intermediate frequency between 20 and 25 MHz. The down-converted signal is acquired using 100 MHz, 14-bit digitizers with a virtual machine environment interface. The dipole signals are processed using a digital down-conversion algorithm running on a dedicated CPU. The data is acquired, the processing algorithm controlled, and position data distributed via an EPICS channel access application. The radio frequency (rf) electronics and digital downconversion algorithm are monitored by injection of a suitable frequency triggered rf tone instead of cavity signal. The tone calibration system can be used to monitor the overall electronics and algorithm health without a beam in ATF2.
During the initial ATF2 commissioning, work on cavity BPMs has focused on calibration and on usage of the system for BBA and dispersion measurements. Two types of calibration are required, quadrupole mover calibration and beam deflection using upstream horizontal and vertical corrector magnets (for BPMs on quadrupole magnets without a mover system). The mover calibrations were easy and proved most successful during the initial operation, while the beam deflection calibration suffered from problems with signal saturation and the need to rely on the optics model to propagate beam trajectories. The mover calibration consisted of moving each quadrupole in steps of 100 m over a total range of 400 m while recording the cavity response for 10 machine pulses at each position. The dynamic range of the cavity system was found to be greater than the range of possible motions of the quadrupole movers ( AE 1:5 mm) The resolution of the C-band system is about 1 m at this moment, although a full analysis with beam motion jitter subtraction has yet to be performed. More details on the cavity design, fabrication, and performance can be found in [17][18][19].
D. Beam size monitor
The beam size monitor used to measure the beam size at the IP is based on inverse Compton scattering between the electron beam and a laser interference fringe pattern [7]. In such a monitor, the energy of the generated gamma rays is typically rather small compared to that of bremsstrahlung photons composing the main background (emitted when beam tail electrons interact with apertures and start showering). In the monitor designed for ATF2 [20], the signal is separated from this high energy background by analyzing the longitudinal shower profile measured with a multilayered detector (located a few meters after the IP after a dipole magnet) [21]. The laser wavelength used is 532 nm, the 2nd harmonic of the Nd:YAG laser, providing a suitable fringe pitch to measure the target vertical size of 37 nm. Four laser beam crossing modes are available to provide a broad dynamic range of up to several microns for the initial beam tuning down to the nominal beam size or less. In addition, a laser wire mode can be used for horizontal beam size measurements.
The system was installed on a rigid mount support [22] at the end of the beam line during summer 2008. After a first checkout with beam in December 2008, commissioning started in 2009 with the laser wire mode. This mode of operation was successfully and reproducibly established during winter and spring runs. The method to properly set up the electron and laser beams was developed experimentally using diagnostics and instrumentation available for both beams. It consists of four main steps: (i) carefully tuning the electron beam trajectory to reduce backgrounds, (ii) aligning the photon detector onto the electron beam axis at the IP, (iii) checking the synchronization of both beams, and (iv) scanning the laser beam horizontally to overlap its waist with that of the electron beam. Figure 5 shows an example of signal intensity as a function of laser position in the horizontal plane. The relative accuracy of the signal intensity measurement, obtained analyzing the longitudinal profile of the shower in the multilayered photon detector, ranges from 10% to 20%. The line shows a Gaussian fit to the data. The measured horizontal size was 13 microns, consistent with the expectation from folding the 10 microns design waist size of the laser wire with the beam size of about 10 microns available during the run and confirmed by wire scanner measurements downstream of the IP [23].
E. Other beam line instrumentation
The instrumentation from the old ATF extraction line [strip line BPMs, ICTs, optical transition radiation (OTR), screen profile monitors, and wire scanners] is reused in the reconfigured beam line. There are five wire scanners with tungsten and carbon wires of 10 and 7 m diameter, respectively, located in the diagnostic section upstream of the final focus section (see Fig. 1). They are used to measure the horizontal and vertical beam emittances after extraction from the DR. An additional wire scanner is installed just downstream of the IP for beam size tuning and has tungsten and carbon wires of, respectively, 10 and 5 m in diameter. Screen monitors are located right after the extraction, in the middle of the beam line, and before and after the FD. An optical fiber beam loss monitor is installed all along the beam line to localize and quantify beam losses in a relative sense.
F. Beam line modeling tools
The successful tuning of the ATF2 beam line relies on many automated software tools prepared and tested throughout the collaboration. To facilitate broad participation in the corresponding tasks, a ''Flight Simulator'' software environment was designed as a middle layer between the existing lower level ATF control system based on EPICS and V-system and the higher-level beam dynamics modeling tools [24]. This is a ''portable'' control system for ATF2 that allows code development and checkout offsite and additionally provides the framework for integrating that code into the operational ATF2 control system. The software developed through the flight simulator is developed mainly through the LUCRETIA [25] package while various ''add-on'' packages are also supported to enable usage of MAD8, PLACET, and SAD [26][27][28] optics programs. It is used in the ATF2 control room alongside tools developed through the existing V-system interface. Tools currently in use include: extraction line coupling correction, extraction line dispersion measurement and correction, extraction line and final focus orbit monitoring and steering, optical tuning knobs for the IP spot size based on moving sextupole magnets, BPM display and diagnostics tools (orbit plotting, reference save/restore system, offline calibration of strip-line BPMs), watchdog tools (e.g. monitoring of the beam orbit in critical apertures, magnet strengths, online optics checks, model response matrix), magnet standardization, orbit bump and BBA tools to extract the offsets between BPMs and quadrupole or sextupole magnet centers.
A. Overview of commissioning runs
Since the beginning of commissioning at the end of December 2008, five commissioning runs were dedicated to ATF2, each two or three weeks long. In the December 2008 run, only some of the magnets were turned on, in a configuration with a large à . The beam was brought to its dump with minimal beam losses to pass a radiation inspection required at KEK and to enable basic hardware and software checks. The following runs in February-March 2009 also used a large à (8 cm horizontally and vertically), this time with all ATF2 magnets switched on for the first time and an optical configuration with basic features similar to the nominal optics [29]. For such values of the à parameters, respectively 20 and 800 times larger than the nominal values, beam sizes in the FD are reduced by the square root of the same factors. This was important to ease requirements for backgrounds while producing IP beam spots with x;y $ 12:5, 1-2 m which were measurable by the BSM in laser wire mode or were just below the resolution limit of the tungsten post-IP wire scanner. In the most recent April and May 2009 runs, the vertical à was reduced to 1 cm, corresponding to an IP spot of y $ 0:5 m. As the chromaticity is not yet predominant for such a value, sextupole magnets had little influence and could be turned off. In parallel with the gradual deployment of software control tools and continuous testing and characterization of the BSM and of the cavity and strip line BPMs, first measurements of the optical functions and beam parameters were also pursued.
B. Beam tuning strategy
Focusing the low emittance beam extracted from the ATF DR to the specified IP beam size requires correcting trajectory and optics distortions induced both by imperfections along the beam line and by mismatch of the beam phase space at DR extraction. While final corrections must be done at the IP, it is still important to keep mismatches under control at the entrance of the final focus, in order to limit distortions of the linear optics in the carefully tuned chromatic correction section and to minimize backgrounds in the BSM from bremsstrahlung, which can be emitted and reach the detector when beam tail particles reach the vacuum chamber at high-points of the optics and start showering.
The beam tuning sequence followed in successive shifts during April and May 2009 runs was (i) bring the beam to the dump with maximal transmission using the chosen magnet configuration and flatten the trajectory, (ii) successive BBA in selected ''critical'' quadrupole magnets, (iii) dispersion measurement in the diagnostic and matching section, followed by correction using the upright and skew quadrupole magnets in the extraction line, (iv) emittance and Twiss parameter measurements combined with coupling correction with the system of dedicated skew quadrupole magnets in the diagnostic and matching section, (v) horizontal and vertical waist-scan and dispersion measurements with the FD set to focus the beam at the post-IP wire scanner, in order to infer à , (vi) if needed, rematch à to its target values using dedicated quadrupole magnets immediately upstream of the final focus section, (vii) vertical beam spot minimization at the post-IP wire scanner by canceling residual dispersion and coupling, using orthogonal combinations of vertical motions of sextupole magnets in the final focus and, alternatively, the set of upstream skew quadrupole magnets, (viii) reset the FD for IP focusing to enable BSM measurements, and (ix) if backgrounds are too large in the BSM, rematch à in the horizontal plane to larger values.
C. Extracted vertical beam emittance and betatron matching
Vertical emittances of less than 10 pm were consistently achieved in the DR during spring 2009 [30]. After extraction to ATF2, several effects can however enlarge it, espe-cially anomalous dispersion and coupling remaining from the DR or generated in the extraction process. In the March 2009 run and during earlier tests in 2007-2008 [31] before reconfiguring the extraction line for ATF2, large growth factors were often observed. In April and May, systematic BBA in selected quadrupole magnets of the extraction line, followed by careful corrections for residual dispersion and coupling, enabled the reproducible measurement of vertical emittance values in the 10 to 30 pm range. Figure 6 shows results from one of the measurements during May 2009. The horizontal and vertical emittances were 1.7 nm and 11 pm, respectively, with rather good horizontal matching but some apparent mismatch vertically, presumably partly due to not having fully corrected the residual coupling.
D. Measurements of first-order optics at IP
In the large à optics used, IP beam sizes are essentially determined by the first-order optical transfer matrices, higher order effects being negligible. Beam size measurements at the IP can thus serve to check the first-order optics, by comparing with nominal values or propagating Twiss parameters measured upstream. Waist scans can be done after setting the FD to focus the beam in both planes at either the BSM or post-IP wire scanner, using orthogonal [43]. In the vertical dimension, BmagY exceeds unity indicating apparent mismatch. QD0, QF1 combinations (for independent control in each plane) or just QD0. From the parabolic dependence of the square of the beam size with respect to quadrupole magnet strength, the emittance and Twiss parameters can be computed. The values obtained are however biased if the beam size at the minimum of the parabola is below the instrumental resolution or if there is significant residual dispersion or coupling.
In the horizontal plane, the nominal beam size of about 12:5 m could be measured with both the post-IP tungsten wire scanners and BSM laser wire mode. Since the horizontal emittance is much larger than the vertical one, residual coupling has a negligible effect. However, there is significant horizontal dispersion in the nominal optics near the IP (see Fig. 2), resulting from the local chromaticity correction scheme, which must be accounted for along with any residual mismatch propagated from imperfectly corrected upstream errors. Figure 7 shows an example of horizontal waist scans from May 2009 [32]. The extracted emittance and à values at the minimum, " x ¼ ð1:13 AE 0:06Þ nm and x ¼ ð13 AE 1Þ cm, could be compared with the values expected at the post-IP wire scanner in the design optics (2 nm, 10 cm) as well as with the ones obtained propagating the measurements made in the extraction line in a previous shift (1.7 nm, 14.5 cm). A full analysis to evaluate the significance and possible origin of differences has yet to be performed.
Similar waist scans were recorded in the vertical plane using the 10 m diameter post-IP tungsten wire scanner. Since the expected minimum value could not be resolved in these scans, they were used only to evaluate the divergence of the beam near the IP. Combining with the emittance measured in the matching section, estimates of y could be obtained in the assumption of negligible residual coupling. Data recorded in the last week of May 2009 gave, for instance, y $ 10 mm after correcting for anomalous vertical dispersion measured during the scan, somewhat less than the nominal value expected at the post-IP wire scanner (18 mm).
E. Vertical spot minimization at the IP
To cancel residual vertical dispersion and coupling at the IP, the best results obtained so far were with a procedure consisting of sequentially using six skew quadrupole corrector magnets (as well as a pair of normal quadrupole magnets in the extraction and matching sections) to empirically reduce the vertical beam spot size at the post-IP wire scanner down to about 3 m, corresponding to the resolution limit of the 10 m diameter tungsten wire which was used.
IV. ATF2 OUTLOOK AND PLANS
The present ATF2 efforts of the ATF collaboration are focused on the first ATF2 goal. The priority for the next runs during autumn 2009 is to measure submicron vertical beam sizes using the interference mode of the BSM. This will involve continued operation with the large à optics (8 cm horizontally and 1 cm vertically) in order to confirm its properties in more detail. Vertical dispersion and coupling effects at the IP will need to be corrected well enough to reduce the vertical beam size to below about 1 m, corresponding to the resolution limit of the 5 m carbon wire scanner behind the IP. The tuning sequence outlined in the previous section will be followed.
During the summer shutdown in 2009, a number of improvements were made, especially in the BSM and BPM systems. The magnets in all the beam lines of the ATF facility were also realigned.
The BSM will use a laser crossing angle of 4.5 degrees, for which the sensitivity is maximal for vertical beam sizes of about 1 m. A new 3 times more powerful laser will enhance the signal significance with respect to the background. Additional collimation in front of the photon detector will also help to reduce the background from bremsstrahlung emitted upstream. Moreover, wire scanner, screen, and knife edge monitors were newly installed in the IP chamber of the BSM to make it easier to overlap the electron and laser beams.
The performance of the cavity BPMs was extensively studied to characterize and improve stability and reproducibility in signal amplitudes and phases over long periods of up to a month. The electronics of the strip line BPMs is also Blue crosses are before and red crosses are after correcting for the measured horizontal dispersion. Green and pink lines are the corresponding parabolic fits. The shift in abscissa between corrected and uncorrected parabolas is due to anomalous horizontal dispersion. being upgraded to suppress residual kicker noise picked up on the electrodes, shown to degrade performances, and to enable more reliable calibration.
During 2010, the goal will be to reduce the à parameters enough towards the nominal values for vertical beam sizes smaller than 100 nm to be measured. Preparations towards this goal are on-going in parallel with the above tasks. In particular, a new system with multiple OTR stations is being prepared in the diagnostic section to supplement the existing wire scanners and enable speedier and more precise 2D profile measurements. The improved Twiss parameter, emittance, and coupling determinations which will result should help to minimize the extracted vertical emittance. Two additional tasks related to the BSM are also important to achieve the first goal in 2010: improved automation of the BSM data acquisition along with integration into the overall software environment for beam size tuning at the IP, and evaluation and control of BSM beam induced backgrounds, in particular as a function of à .
The ATF collaboration also pursues several other hardware developments of particular relevance to future linear colliders, especially in the context of the second ATF2 goal: characterization of the site and beam line stability [33], the MONALISA interferometer system [34] for accurate monitoring of the FD position with respect to that of the BSM, the feedback on nanosecond time scale (FONT) project [35], the nanometer resolution IP-BPM project [36], the fast nanosecond rise time kicker project [37], and a new cavity-BPM optimized to monitor angular variations of the beam near the IP with high accuracy [38]. A laser wire system operated in the old ATF extraction line during 2005-2008 with the aim to demonstrate 1 m resolution beam size measurements [39] has also been moved to a new location in the ATF2 diagnostics section for further testing and development in coming years. In the future, this system could be expanded to replace some or all present wire scanners. Future linear colliders are expected to rely extensively on laser wire systems, so it is important to gain experience operating a multiple system in realistic conditions. Plans to upgrade the performance of ATF2 on the time scale of a few years, after the main goals of ATF2 have been achieved, are also under consideration. In particular, optical configurations with ultralow à values (2 to 4 times smaller than nominal in the horizontal and vertical planes), relevant to both the CLIC design and to some of the alternative ILC beam parameter sets [1], are actively studied [2]. There is also a proposal to upgrade the FD with superconducting magnets [40] built according to ILC direct wind technology, to allow stability studies with beam of direct relevance to the setup planned at ILC. An R&D program to develop a tunable permanent magnet suitable for the FD is also pursued in parallel, with as an initial goal the construction of a prototype for initial beam testing in the upstream part of the ATF2 beam line [41]. Since possibilities to achieve the smallest vertical beam sizes are limited, especially in the case of reduced à values, both by the field quality in the magnets of the presently installed FD [10,11] and by their aperture (to avoid excessive bremsstrahlung photon background in the BSM), these proposals are naturally connected in the sense that an upgraded FD should also aim to both enlarge the aperture and improve the field quality.
Longer term, more tentative, plans being discussed include, after 2012, the possibility of a photon facility, with laser and optical cavities for the planned photon linear collider and generation of a photon beam. Strong QED experiments with laser intensities of >10 22 W=cm 2 could then also be considered, e.g., to pursue experimental studies of the predicted Unruh radiation [42].
V. CONCLUSION
The ATF collaboration has completed the construction of ATF2 and has started its commissioning. Important experience operating the new cavity BPM and BSM instrumentation in real conditions has been gained and first beam measurements have been performed in a magnetic configuration with reduced optical demagnification. Both horizontal and vertical emittances were successfully tuned and measured in the extraction line, with values approaching the design values of 2 nm and 12 pm, respectively. First checks of the first-order optics along the beam line and at the IP were also done. Hardware developments for the second ATF2 goal are being pursued in parallel with the present commissioning work for the first goal. The collaboration is also preparing several near and long-term plans for ATF2. In the next few years, information very valuable for any future collider with local chromaticity correction and tuning of very low emittance beams can be expected. In the previous experience at the FFTB, the smallest vertical beam sizes which were achieved were about 70 nanometers. The work described here continues to address this largely unexplored regime in a systematic way. | 7,372.4 | 2010-04-21T00:00:00.000 | [
"Physics"
] |
Acaricidal Properties of Bio-Oil Derived From Slow Pyrolysis of Crambe abyssinica Fruit Against the Cattle Tick Rhipicephalus microplus (Acari: Ixodidae)
Slow pyrolysis is a process for the thermochemical conversion of biomasses into bio-oils that may contain a rich chemical composition with biotechnological potential. Bio-oil produced from crambe fruits was investigated as to their acaricidal effect. Slow pyrolysis of crambe fruits was performed in a batch reactor at 400°C and chemical composition was analyzed by gas chromatography-mass spectrometry (GC-MS). The bio-oil collected was used in bioassays with larvae and engorged females of the cattle tick Rhipicephalus microplus. Biological assays were performed using the larval packet test (LPT) and adult immersion test. The GC-MS of crambe fruit bio-oil revealed mainly hydrocarbons such as alkanes and alkenes, phenols, and aldehydes. The bio-oil in the LPT exhibited an LC90 of 14.4%. In addition, crambe bio-oil caused female mortality of 91.1% at a concentration of 15% and a high egg-laying inhibition. After ovary dissection of treated females, a significant reduction in gonadosomatic index was observed, indicating that bio-oil interfered in tick oogenesis. Considering these results, it may be concluded that slow pyrolysis of crambe fruit affords a sustainable and eco-friendly product for the control of cattle tick R. microplus.
INTRODUCTION
A major concern in livestock is the infestation in cattle by the tick Rhipicephalus microplus (Canestrini and Fanzago, 1887) (Acari: Ixodidae). This tick can be found in different places in the world with a tropical and subtropical climate and is responsible for a loss of 22-30 billion dollars a year in the livestock industry (Lew-Tabor and Valle, 2016;Ali et al., 2019). Blood spoliation by R. microplus causes a reduction in weight gain of bovine and, consequently, decreases the production of meat and milk. In addition, this tick is a vector of infectious agents that cause babesiosis and anaplasmosis (Araújo et al., 2015).
The most used strategy for the control of R. microplus is using synthetic acaricides. However, these acaricides are mostly toxic to humans, animals, and the environment. In addition, resistance to acaricides has been reported and has become one of the major obstacles in tick control programs. There are populations of ticks with multiple resistance, with cases of resistance to six classes of acaricides also to associations (Fernández-Salas et al., 2012;Reck et al., 2014;Higa et al., 2015;Klafke et al., 2017). Therefore, it becomes necessary to develop new products for the control of R. microplus that are safe for the environment, human, and animal health, and have a low cost.
A new area that has been investigated is the use of products pyrolysis from plant biomass, which has a high biotechnological potential. Plant biomass and agroindustrial residues can be thermochemically converted by pyrolysis. Pyrolysis is a process where an organic matter is subjected to high temperatures (300-1,000 • C) in the absence of oxygen (Bridgwater, 2012), generating as products: biochar, pyrolytic liquid, and combustible gases. Pyrolytic liquid can be separated by density difference in an aqueous fraction and a bio-oil (organic fraction) (Czernik and Bridgwater, 2004;Kraiem et al., 2016). The composition of bio-oils varies with the biomass used, being a complex mixture of organic compounds with different chemical functions. Biooils obtained from lignocellulosic biomass contain phenolic derivatives, hydroxyaldehydes, hydroxyketones, sugars, and carboxylic acids, mainly acetic acid and formic acid. This chemical composition is due to the depolymerization and fragmentation reactions of the three main constituents of plant biomass: cellulose, hemicellulose, and lignin (Mohan et al., 2006).
Pyrolytic liquids have been the subject of studies for several purposes, mainly as an alternative to fuel production. In recent years, some research groups have revealed the biopesticidal effect of bio-oils against organisms such as insects, bacteria, and fungi (Mattos et al., 2019). The pyrolysis process has great socioenvironmental potential, for generating renewable products and having a reduced emission of greenhouse gases (Silva et al., 2014).
Crambe (Crambe abyssinica Hochst) is an annual herbaceous plant of the Brassicaceae family, native to eastern Africa and the Mediterranean region, which has a low production cost and high tolerance to grow in different climatic conditions (Falasca et al., 2010). Crambe fruit is a small sphere-shaped siliqua about 2 mm in diameter, consisting of a thin pericarp and a single seed that is covered by a thin brown husk. The pericarp, which remains attached to the seed, possesses around 30% of the total mass of fruit, has a high content of lignin (40%) and cellulose (41%). The fruit contains about 21% protein, 16-18% fiber, and 30-44% of a nonedible oil that has a high content of erucic acid (C 22:1 ), a raw material to produce industrial lubricants, synthetic rubber, plastic films, nylon, and adhesives (Falasca et al., 2010;Ionov et al., 2013;Hu et al., 2015;Bassegio et al., 2016;Samarappuli et al., 2020).
Considering the need for alternatives to control the cattle tick R. microplus, the study of the biotechnological potential of pyrolysis bio-oil can provide the development of new products with acaricidal properties. This work aimed to determine the chemical composition of bio-oil derived from slow pyrolysis of crambe fruit, and the acaricidal activity on larvae and engorged females, and the effect on reproduction of R. microplus.
Biomass and Bio-Oil Production
Crambe fruits were supplied by the MS Foundation (Maracajú, Mato Grosso do Sul, Brazil; 21 • 37 49 S, 55 • 09 37 W). The bio-oil from crambe fruit (dry fruit consisting of the pericarp, seed, and husk) was obtained through the slow pyrolysis carried out in a batch reactor of the Laboratory of Synthesis, Chromatography, and Environment (SINCROMA). The reactor consists of: A Heraeus R/O 100 oven; a fixed bed of borosilicate type glass with ground joints, dimensions 1.40 cm × 10 mm; a temperature controller and operating time; a liquid collection system consisting of a condenser, a settling funnel (500 ml) and gas scrubber flasks.
Fifty grams of crambe fruit were subjected to slow pyrolysis at 400 • C. The biomass was placed in the central region of a cylindrical glass tube which was introduced into the reactor and connected to the condensation system. Nitrogen gas was continuously applied, 500 ml/min, before and throughout the process. The sample was subjected to a heating rate of 10 • C/min and the temperature was then held at 400 • C for 2 h. The pyrolytic biochar was trapped in the middle of the reactor and collected after cooling. The non-condensable gases passed through a gas scrubber system and were bubbled in water. The condensable gases, when passing through a condenser, generated the pyrolytic liquid. The liquid fractions were separated from the organic phase (bio-oil) by density difference in a separating funnel.
Gas Chromatography-Mass Spectrometry (GC-MS) Analysis
The characterization of bio-oil was carried out by gas chromatography coupled to mass spectrometry (GC-MS), after fractionation by classical liquid chromatography, using hexane and dichloromethane eluents.
The analyzes by GC-MS were carried out in a Shimadzu apparatus (QP2010), automatic sampler QP, coupled to the Mass Spectrometer, using a fused silica capillary column DB5-MS (20 m × 0.18 mm in internal diameter, 0.18 µm of phenyl polydimethylsiloxane). Helium was employed as a carrier gas at a flow rate of 0.6 ml min −1 and the injector (mode split, 1:10) temperature was 280 • C. The initial oven temperature was 40 • C (5 min hold) and was ramped to 230 • C at 5 • C min −1 and 230 • C during 10 min. The mass detector was operated under the following conditions: the temperature at 250 • C, electron ionization energy, 70 eV; scan range, 40-600 Da; MS interface temperature were maintained at 200 • C.
Compounds were identified by comparing mass spectra with the National Institute of Standards and Technology 147 mass spectral library, considering a similarity equal to or greater than 85%. The semiquantitative analysis was done by normalizing the areas of the identified substances.
Preparation of Samples for Acaricidal Test
Crambe fruit bio-oil dissolved in an aqueous solution with 5% of tween 80 (v/v) as an emulsifying agent. Solutions were prepared to obtain the concentrations of 25, 20, 15, 10, and 5% (v/v) for larvae test and 25, 15, and 10% (v/v) for female test. The emulsifying solution (5% tween 80) and distilled water were used as the negative controls. Commercial products containing amitraz (Triatox/MSD saúde animal) and deltamethrin (Butox/MSD saúde animal) were used as positive controls at concentrations of 250 and 25 µg/ml, respectively.
Ticks for Bioassays
Rhipicephalus microplus ticks of the Porto Alegre, Brazil strain were maintained on infested Hereford bovines acquired from a tick-free area. All bovines were housed in individual tickproof pens on slatted floors in the Faculty of Veterinary from the Federal University of Rio Grande do Sul, Porto Alegre (UFRGS) and Institute of Veterinary Research Desidério Finamor (FEPAGRO), Brazil (Reck et al., 2009;Ali et al., 2016). After the cycle on the host is completed, fully engorged females dropped from calves were thoroughly washed with tap water and dried on a filter paper towel. Part of the engorged females was used for the adult immersion test (AIT), while others were kept in a bio-oxygen demand (BOD) incubator at 28 • C and 70-80% relative humidity (RH) for approximately 20 days to obtain eggs and larvae, which were later used in biological assays.
All experiments were conducted following the guidelines of the Ethics Committee on Animal Experimentation of UFRGS and FEPAGRO, Brazil (institutional approval number 14403).
Larval Packet Test (LPT)
The larval packet test (LPT) was performed following the methodology defined by FAO (2004). The filter paper packages (3 × 3 cm) were impregnated with 180 µl solution uniformly distributed with a pipette on both sides. About 100 tick larvae, aged around 14-21 days old, approximately, were added to each filter paper package, and the ends were sealed with a staple. The packets were placed in a BOD incubator at 28 • C and 70-80% RH for 24 h. After 24 h, the envelopes were opened and inspected using a stereoscope, to record the number of live and dead larvae. Larvae with walking ability are considered alive. Larvae that are immobile or that move but cannot walk are classified as dead. The test was repeated three times with different batches of larvae and performed in duplicate.
Adult Immersion Test (AIT)
The AIT was performed as described by Drummond et al. (1973) with minor modifications. Ticks were distributed to groups randomly (15 engorged females per group). The groups of R. microplus were immersed for 1 min in 3 ml solution of the respective concentrations (25, 15, and 10%) of bio-oils. After this period, ticks were removed from the solution with the aid of a sieve, distributed in Petri dishes (9 cm diameter, 1.5 cm high), weighed, and kept in a BOD incubator at a temperature of 28 • C and 70-80% RH. The mortality of the females was evaluated daily for 15 days. The dead ticks were diagnosed using parameter increasing cuticle darkness, hemorrhagic skin lesions, and stopped Malpighian tube movement observed in a stereomicroscope (Pirali-Kheirabadi et al., 2009). After 15 days, the eggs laid were placed in a glass tube, weighed, and observed separately, using the same condition of incubation for the next 30 days for visual estimation of hatching rate. This experiment was performed three times in duplicate.
The percentage inhibition of oviposition (IO) was calculated according to Singh et al. (2015), as follows: Reproductive index (RI) = average weight of eggs laid (mg)/average weight of females before treatment (g).
Percentage inhibition of oviposition (IO%) = (RI of the control group -RI of treated group/ RI control group) × 100.
Morphometry of Ovaries
Engorged females of R. microplus were incubated with crambe fruit bio-oil solution at a concentration of 25%, following the methodology of AIT, and then kept in a BOD incubator, with 70-80% RH and 28 • C. The ticks were subsequently dissected 24, 48, and 72 h after immersion, with the aid of a stereomicroscope, using a 0.01 M phosphate-buffered saline solution. The ovary was removed and weighed. A total of 45 females/treatments were used.
To quantify the interference of pyrolysis bio-oil in the development of the ovary, the gonadosomatic index (GSI) was calculated by dividing the total weight of the ovary by the mean body weight of each group of females, both in the control and treated groups, for each period evaluation (Barbosa et al., 2016).
The percentage inhibition of ovarian development (IOD%) was calculated as follows:
Statistical Analysis
Data were expressed as the mean ± SD of the mean. The efficacy was assessed by measuring tick mortality (%) and the lethal concentrations for 50% (LC50) and 90% (LC90) with their 95% confidence limits values were estimated by applying regression equation analysis to the probit data of mortality. Groups were compared using the one-way ANOVA and the Tukey test. A p value less than 0.05 was considered significant. Statistical analysis was performed using GraphPad Prism 6.0 software.
Bio-Oil Composition
The yield of coal and pyrolysis liquids were estimated from the mass of each product concerning the mass of raw material, and yield values of the gaseous product were obtained by difference. The yield of crambe fruit pyrolysis products is related to the mass of each product compared to the mass of the raw material used in the process, and the yield values of the gaseous products were obtained by difference. The pyrolysis of the crambe fruit generated a higher bio-oil yield than the aqueous fraction ( Table 1). The total mass yield of bio-oil was 34% (w/w). The mains organic functions identified by GC-MS of compounds present in bio-oil were: hydrocarbons, phenol, aldehydes, heterocyclic nitrogenous, ketones, nitrile, amides, and ester. Analysis of hexane fraction showed the presence of substances derived mainly from pyrolysis of triacylglycerides contained in the seed, such as linear alkanes, alkenes, and alkynes, alkadienes, mononuclear alkylbenzenes, aldehydes, carboxylic acids, and esters. All alkanes and alkenes identify in this fraction had similarities above 95%. High molecular mass nitriles (C 16 -C 19 ) were also identified in this fraction. Semiquantitative analysis of hexane fraction performed by normalizing the areas of all identified compounds showed that major compounds were: heneicos-9-ene, from the thermochemical conversion of erucic acid; heneicos-1ene; hexadec-11-al; heptadec-1-ene; and heneicosane. We were identified this compounds in a small proportion a phytosterol (β-tocopherol), high mass molar hydrocarbons, and steroids as ergostenol, ergostene, stigmastenol, and cholestanone. Some of these substances have already been identified in crambe fruit biooil (Silva et al., 2019). In fraction eluted with dichloromethane, the predominance was phenols and methoxy-phenols from the thermal decomposition of the lignocellulosic part of the pericarp and N-heterocycles compounds such as pyridines, pyrroles, and pyrazines. Nitrogen compounds come from the pyrolysis of crambe fruit seed proteins. Phenolic compounds were the majority in this fraction ( Table 2).
Acaricidal Bioassay With Ticks Larva
The acaricidal effect of bio-oils was tested using larvae of R. microplus. In LPT it was observed that the crambe fruit bio-oil, in all concentrations, caused significant mortality of R. microplus larvae 24 h after treatment. At a concentration of 15%, crambe bio-oil reached 91.6% of larval mortality ( Table 3). The commercial acaricide amitraz was not effective against larvae and deltamethrin caused low mortality of larvae (45%). The calculation of regression analysis of larvae showed an LC50 and LC90 of 7.6 and 14.4%, respectively ( Table 4). concentration (v/v), the effect of bio-oil was less than that found by the commercial acaricide deltamethrin.
Acaricide Bioassays With Ticks Female
In the AIT the efficacy of treatment against engorged females was also evaluated by measuring egg production. Results showed that the IO of the crambe fruit bio-oil differed significantly from the negative controls (p < 0.0001) at all concentrations tested. The bio-oil at a concentration of 15% showed an IO similar to the commercial acaricide amitraz ( Table 5).
Effects on the Ovary of Ticks
To analyze the interference of crambe fruit bio-oil in ovarian development, engorged females were treated with bio-oil at a concentration of 25% and dissected 24, 48, and 72 h after treatment. Females treated with bio-oil showed a reduction in GSI in all the periods evaluated, however, only the reduction in 72 h of exposure was significant (Figure 2). During dissection, it was observed that the ovaries of females in groups treated with crambe fruit bio-oil had small and whitish oocytes after 72 h, therefore, little developed (Figure 3). In addition, there are no mature oocytes in the oviduct, whereas in the ovary of the control group, its presence can be observed.
DISCUSSION
Liquid pyrolysis products have been the target of studies evaluating biocidal effects in different organisms. Bio-oils have a high yield and a rich chemical composition, which varies according to the biomass used and the conditions of the thermochemical process. Slow pyrolysis of crambe fruit gave rise to a high bio-oil yield. The general distribution of the yield of crambe fruit pyrolysis products was like that described in the literature (Silva et al., 2019).
Results of this study show that slow pyrolysis of crambe fruit produced a bio-oil with toxic effect in R. microplus. This is the first report demonstrating the effect of pyrolysis bio-oil as an acaricide. A previous study carried out by Lindqvist et al. (2009) (Acari: Tarsonemidae) and Tetranychus urticae (Acari: Tetranychidae) mites. However, the bio-oil was not toxic to mites. Most studies evaluating the effect of pyrolysis products on arthropods have been carried out with insects and this is an area of research that has not yet been fully explored (Mattos et al., 2019). Crambe fruit bio-oil caused high mortality of R. microplus larvae in 24 h, with an LC90 of 14.4%. Similarly, some bio-oils FIGURE 2 | Gonadosomatic index of Rhipicephalus microplus tick females exposed 24, 48, and 72 h to crambe fruit pyrolysis bio-oil (25% concentration). Tween 80 (5%) was used as a negative control. The experimental number was 45 females/treatment. The results are expressed as mean ± SD, **p < 0.05 (two-way ANOVA).
also showed the larvicidal effect on larvae of Colorado potato beetle Leptinotarsa decemlineata (Coleoptera: Chrysomelidae). Pyrolysis bio-oil from dried coffee grounds and tobacco-caused 100% mortality of CPB larvae in 48 h at a concentration of 50 and 100%, respectively (Booker et al., 2010;Bedmutha et al., 2011), concentrations higher than that found with the crambe bio-oil in R. microplus. These studies, different from ours, used fast pyrolysis at higher temperatures (400-600 • C). The fast pyrolysis process at temperatures above 500 • C, although generate a high yield of bio-oil (Bridgwater and Peacocke, 2000;Bridgwater, 2012), typically form polycyclic aromatic hydrocarbons (Simko, 2005;Mohan et al., 2006), toxic substances to humans and environment (EPA, 2008;Wu et al., 2011).
Crambe fruit bio-oil significantly interfered in tick reproduction, inhibiting egg development and egg-laying. At a lower concentration (10%), mortality was gradual, reaching 65.5% mortality only on the 15th day after treatment. However, on the fourth day, which would be the peak of egg-laying (Hitchcock, 1955), only 3% of the females had died, which does not explain the high inhibition of egg-laying observed. These results show that crambe fruit bio-oil, in addition to being toxic to females tick causing their death, also prevents them from laying eggs, contributing to stopping the tick cycle.
Thus, it was investigated if there was a metabolic interference, impairing the development of eggs or whether the organs associated with the egg-laying process (Genet's organ, for example) may have been affected. Treated R. microplus females were dissected to analyze the ovaries. Crambe fruit bio-oil significantly reduced the GSI by more than half after 72 h. Barbosa et al. (2016) also observed a strong decrease in GSI and the number of oocytes when R. microplus engorged females were treated with 3β-O-tigloylmelianol, a protolimonoid isolated from Guarea kunthiana (Meliaceae).
According to Balashov (1983), oogenesis can be divided into five stages. Phases III and IV of oogenesis consist of the period of formation of yolk granules by oocytes, and at the end of this period, the oocyte is ready for ovulation. During ovulation (stage V), the oocyte passes through the ovary lumen, is moved by peristaltic movements to the oviduct and, finally, to the vagina (Balashov, 1983;Denardi et al., 2004). The oocytes of R. microplus females treated with crambe fruit bio-oil were poorly developed. As the females were incubated after feeding and unformed eggs were seen in the ovary, there may be a change in stage III or IV of oogenesis in females treated with crambe fruit bio-oil. In addition, unlike the control group, there were no oocytes in the oviduct, indicating that the oocytes did not reach the last stage of development (V). Our group also evaluated ovaries of tick females treated with pyrolysis bio-oils from other biomasses, and there was no significant reduction in GSI (data not shown).
Bio-oils have a large number of organic compounds, and this chemical composition is dependent on raw material and the process conditions used. The acaricidal activity found may be associated with a synergistic effect between the various substances present in crambe fruit bio-oil. Phenolic compounds, fatty acids, and acetic acid have been found in liquid pyrolysis products from FIGURE 3 | Ovaries of engorged females treated with 5% tween 80 (negative control) (A) and females treated with the crambe fruit pyrolysis bio-oil (B) obtained after 72 h of treatment. Red arrow: oviduct; green arrow: poorly developed oocytes (increase: 20×). Bar = 1 cm. plant biomass with important insecticidal properties (Yatagai et al., 2002;Bedmutha et al., 2011;Kartal et al., 2011;Suqi et al., 2014;Cáceres et al., 2015). However, there are still few studies evaluating the effect of pyrolysis products on arthropods, and not all have evaluated the chemical composition (Mattos et al., 2019). Crambe fruit bio-oil exhibited many hydrocarbons in its hexane fraction. Hydrocarbons have already been found as major compounds in pyrolysis bio-oils of coffee grounds and macadamia nutshells, and these bio-oils were active against CPB and the termite Coptotermes formosanus, respectively (Yatagai et al., 2002;Bedmutha et al., 2011).
These interesting data show us that crambe fruit pyrolysis biooil is a promising acaricide against the cattle tick R. microplus. However, additional research must be carried out to investigate issues related to environmental safety, to develop a safe product for the population.
CONCLUSION
Slow pyrolysis of crambe fruits produces a bio-oil in high yield, containing several organic compounds being a good source for developing new biotechnological products. This is the first report of the use of pyrolysis bio-oil as an acaricide. The results of this study demonstrate the acaricidal effect of crambe fruit bio-oil on larvae and engorged females of R. microplus. In addition, bio-oil interferes with tick reproduction, inhibiting egg-laying. Crambe fruit bio-oil is, therefore, a potential alternative of sustainable product for control R. microplus.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Ethics Committee on Animal Experimentation of UFRGS and FEPAGRO, Brazil.
AUTHOR CONTRIBUTIONS
GA, MC, and EF contributed to the conception and design of the study. CM and JA organized the database. CM performed the statistical analysis and wrote the first draft of the manuscript. CM, JA, and NT performed tick experiments. BS performed pyrolysis conversion. BS, CM, MC, and GA performed the chemical analysis. CM and EF wrote the final manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
FUNDING
The authors are grateful to Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and Programa de Fomento à Pesquisa -Universidade Federal Fluminense (FOPESQ-UFF) for their financial support to the present work. This work was also supported by FAPERJ Scholarship E-26/200.584/2021. | 5,525.8 | 2021-12-02T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
STRUCTURAL ANALYSIS OF FUNCTIONALLY GRADED MATERIAL USING SIGMIOADAL AND POWER LAW
The stress-strain relations, displacement distribution, stress resultants and mid plane strain resultants of a functionally graded material plate are studied using Hamilton’s principle. A simply supported rectangular thick shell direct stress, inplane shear stress, transverse stress and displacement are investigated. The analysis and modeling of five layers FGM shell is carried out using MATLAB19 code with ABAQUS20 software. Using distinct materials on the top and bottom layers of the shell, a transverse uniform load in five degrees of freedom is applied with a specific Poisson's ratio and Young's modulus in a power and sigmoidal law function through the thickness direction. A power law was used to determine the distribution of properties through shell thickness. The results showed that the bottom layer affected significantly most stress due to subjected to the most in-plane stress while the displacement is greatest at the top layer.
INTRODUCTION
Today the world is turning largely to composite materials and there is almost no use or application devoid of it due to its high performance properties that exceed or compete with materials made of steel such as resistance, stiffness, thermal insulation and corrosion in addition to light weight. The mechanical properties of the functional graded materials (FGM) differ for each material element, where the change in those properties occurs along the thickness. The power-law function, sigmoidal law function and exponential law function used to define the functions of the FGM behavior where these laws describe the variety of the top surface Young's modulus and the stress Intensification in the interfaces. The properties of main components in FGM such as thermal, mechanical, magnetic, optical are varied according to the variety of chemistry or microstructure, thus the traditional technique not useful with these components and the smooth variance in the properties of these materials results from being microscopically inhomogeneous materials [1][2]. Also, the properties of gradient materials were characterized by high performance specifications such as high bonding strength, reduced stress concentration and resistance to high thermal loads, which made these materials the focus of attention, especially in advanced and vital industries such as the manufacture of reactors, optics and electronics, in addition to the uses of mechanical and medical engineering [3].
In engineering applications that are exposed to impact loads in addition to thermal loads, engineering structures must provide sufficient support to withstand those loads, making the observations directed towards specific materials that have the ability to withstand those loads together [4][5]. The advantage of the mixture of ceramic with low thermal conductivity, and other materials or a combination of other metals to improve the capability to withstand high-temperature gradient environments in addition to keeping their structural strength Also, the possibility of manufacturing these mixtures with a constantly varying volume fraction [6].
The integral form from the Refined Zigzag Theory (RZT) equilibrium equations by Peridynamic Differential Operator (PDDO) to achieve the perfect solution of the differential equations and considered this theory is more appropriate for thick as well as moderately thick plate stress analysis due to independent on the correction factors of shear and involve several of kinematic variables. The stress concentration was reducing at the interface core as well as face sheet due to use the functionally graded cores [7].
The nonlinear vibration of nanotube composite with single-walled functionally graded beam by nonlinear von Kármán geometric and theory of Timoshenko beam were discussed by other researchers [8] and the static and modal analysis DIAGNOSTYKA, Vol. 22, No. 4 (2021) Abd-Ali NK, Madeh AR.: Structural analysis of functionally graded material using sigmioadal and ..
60
were described for simply supported conditions of plate with function graded properties across thickness as well as volume fraction variance [9]. Specific studies reported the graduation of material properties in transverse and axially thickness direction according to power low to study the characteristic of dynamic analysis. A virtual work principle was used to derive the equation of motion and the model discretization was treated by finite element method [10]. Also, the axial stress analysis which effected by various types of functionally graded beam was evaluated under thermal environment [11].
The excellent properties of FGM as well as corrosion and erosion resistant in addition to thermal characteristics prompted researchers to study the use these components in free and force buckling analysis under different thermal effect [12]. A plate with a novel functionally graded with smoothly distribution of stress through the thickness to avoid interfacial failure of sandwich plate structure was discussed and the mechanical properties such as shear modules and Poisson's ratio were varied through thickness. the equations of motion were solved by using Ritz method with Chebyshev polynomials to obtain bending deflection, shear buckling load and stresses for simply supported and clamped plate [13]. Also, the NURBS functions were used to describe the displacement of element that depends on parameters of function graded material and the stiffness of the thick plate including extension-bending coupling [14].
Other studies evaluate nonlinear eigenvalue analysis of FGM nanocomposite with carbon nanotube as reinforcement agent with different distribution in direction of thickness depend on beam theory of Timoshenko and nonlinear theory of Karman, where the volume fraction, slenderness ratio and amplitude of vibration were discussed through free vibration state [15]. Other studies described the sigmoid-law in distribution of different material properties such as ceramic and metal through thickness of beam. Stiffness and buckling matrices were built by the finite element model and solved free vibration analysis numerically [16]. In addition, other researches investigate the dynamic response of functionally graded for axi-symmetric plate and cylinders under thermo-mechanical and finite element formulation under thermal load using theory of first order shear (FSDT) [17].
Present work focus on the mechanical properties of multilayer composite material, where The stressstrain relations strain displacement, stress resultants and mid plane strain resultants of a functionally graded material plate are studied using Hamilton's principle. A simply supported rectangular shell direct stress, inplane shear stress, transverse stress and displacement are investigated.
MATHEMATICAL FORMULATION
A distinctive material properties P is differed through the thickness of plate along the expressions of the power law [18]: (1) where Pt and Pb refer to the top and bottom plate faces property, respectively, and parameter n indicates the profile variation of the material along the thickness where at fully ceramic plate, the value of n equal to zero. Stress concentrations arise on one of the interfaces in which the material is continuous but varies rapidly when a FGM of a single powerlaw function is applied to the multi-layered laminated shell. To achieve a smooth distribution of stresses across all interfaces, the volume fraction was determined using two power-law functions.
A transverse force is applied to a linearly elastic, medium-thick, rectangular FGM shell. The thickness h of a medium-thick FGM shell is considered to be uniform, and the thickness h is in the range of 1/20 1/100 of its span. 1. Before and after deformation, line segments perpendicular to the middle surface remain unstretched and normal. 2. The FGM shell deflections are small in compared to its thickness h, therefore linear straindisplacement relationships are acceptable. 3. Because the thickness is considered to be in the range of 1/20 1/100 of its span, the normal stress in the thickness direction may be ignored. 4. The Young's modulus and Poisson's ratio of the non-homogeneous elastic FGM are functions of the spatial coordinate z. The FGM plate's deformations and stresses are based on the following assumptions: The principal equations of motion and the models of finite element developed for the classical plate theory and First-order theory are suitable for the multi-layers plates. The stiffness of plate were devoted by [19]: DIAGNOSTYKA, Vol. 22, No. 4 (2021) Abd-Ali NK, Madeh AR.: Structural analysis of functionally graded material using sigmioadal and ..
61
where the subscripts, c and m refer to the ceramic and metal, respectively, while the coefficient of thermal expansion and the modulus as well as the coefficients of elastic Qij differ through the thickness of plate. [8] Also, the strain and kinetic energies may be stated respectively as [20]: = ∭ (8) = 1 2 ∬ ℎ 2 (9) The four position for each element have 3DOF where the transverse displacement represented by w as well as θx, θy act x and y axis rotations respectively [21].
The matrices of stiffness and mass elements were developed on the minimum potential and kinetic energy principle [8]: and The Hamilton's principle was used to obtain the equation of motion for plate [22]: The constitutive law, also known as the generalized law of Hooke, establishes the desired connection by using the concept of linear elastic material behaviour [22]. = (14) The stresses and strains are linearly linked in this case thanks to the constitutive matrix C. Taking use of symmetry and taking strain energy into account. As demonstrated in equation (15), the anisotropic materials may be described using just 21 constants [28].
As explained in the basic material coordination system, the constitutive relation of orthotropic materials is simplified: In terms of engineering constants, the constants Q can be described by [29]
Finite Element Modeling Technique
A functionally graded material is a new technology used to create composite material that may be developed for use in high temperature layer and heat shield applications due to its superior mechanical and thermal characteristics.
The FGM properties will be change during the thickness and the numerical model consist of various layers in order to get this variation in properties as shown in the figures (1), (2) and (3). From the bottom surface, the material properties are evaluated with using the different laws of volume fraction distribution. Even though, the layered of structure does not show the material properties graduation, a adequate layers number can practically approximate the gradation of material peoperties. In this work, the analysis and modeling of FGM shell is carried out using ABAQUS software. ABAQUS proposes a more elements to select from for the gradient materials in the modeling. The FGM properties subjected to mechanical loads examined on a flat shell. Take into consideration a symmetrically, rectangular laminated thick shell with simply supported edges boundary conditions as shown in Fig. 4 and subjected to an uninformed pressure. Transverse shear deformations are taken into consideration, which may be essential if the shell is thick or involves layers with a low transverse shear modulus. The thick shell can be laminated cross-ply (symmetric or anti-symmetric) as well as angle-ply symmetrically with a large number of layers, or orthotropically. DIAGNOSTYKA, Vol. 22, No. 4 (2021) Abd-Ali NK, Madeh AR.: Structural analysis of functionally graded material using sigmioadal and ..
Results and Discussions
Stress concentrations arise on one of the interfaces where the material is continuous but varies significantly when a FGM of a single power-law function is applied to the multi-layered composite. To achieve a smooth distribution of stresses across all interfaces, the volume fraction was determined using two power-law functions. Consider an elastic rectangular plate and shell. The local coordinates x and y defined at the thick shell's edge, but the z-axis, which began at the shell's center surface, is really in the thickness direction. The top and lower surfaces have distinct material characteristics, such as Young's modulus and Poisson's ratio, also they are pre-defined based on the performance requirements. The Young's modulus and Poisson's ratio of the plates and shells, on the other hand, only vary uniformly in the thickness direction (z-axis), resulting in the following: Plate and shell made of functionally graded material (FGM).
The two power law functions are illustrated in the figures (5) and (6). The change of Modular ratio with sigmoid distributions is shown in Fig. 6, and this FGM shell is hence known as a sigmoid FGM shell (S-FGM). The stiffness of the S-FGM of thick shell reduces as the power law index increases, whereas the load vector increases as the variational parameter increases. The magnitude of deflection increases as the value of the power law index increases, and the magnitude of deflection increases as the value of the variational parameter increases.
Static Analysis
The numerical results for the combined large deflection of a simply supported functionally graded square shell in the figure (7) exposed to uniformly distributed pressure were provided in the static analysis, as illustrated in figure (8). The intensity of deflection increases as the power law index increases. The influence of transverse shear deformation may be to increase deflection, as predicted. When the thickness ratio is small, the differences in deflection values predicted by the current model are significant, but they become trivial as the side-to-thickness ratio increases. The stress coefficient in the z direction is 0 since the load was applied transversely. The magnitude of the stress coefficients in the x and y directions was determined to be the same. Shear stress was also determined to have the same magnitude in the xz and yz planes.
Figures (11) and (12) shows the distribution of maximum Von Misses and Tresca stress distribution. The maximum value of stress was found at the edge support due to the Von Misses stress was related to the bending moment which become maximum at this supports. There is a high level of agreement between the current and published results. The results demonstrate that the current formulation performs exceptionally well in terms of accuracy. (13), (14) and (15) show the variation of tensile stress (σx), (σy) and shear stress (σxy) respectively for specified boundary conditions of a square shell with uniformly distributed load for P-GM. From the extracted result of distribution, the variation of stress can be noted in the direction of stress coordinate for (σx) and (σy) and in the shear direction for the (σxy). Strain distribution is related to the strain energy, also called as deformation energy, is the potential energy contained in an object as strain and stress. The work done by the external force is transformed into energy stored in the solid throughout deformation process, which is known as elastic strain energy. The stored energy is known as deformation energy or strain energy when it is acted upon by an external force. Elastic deformation energy and plastic deformation energy are two types of deformation energy. The solid will release part of its energy and work when the external force and deformation are gradually reduced, and this part of the energy is elastic deformation energy.
Several works were focused on manufacturing process of composite materials with different properties and different engineering applications to investigate the mechanical behavior of these components and to improve its properties [23][24][25][26][27]. The present study looks forward to use new reinforcement materials with new layer arrangement to improve specific applications.
CONCLUSIONS
Some conclusions may be extracted from this work such as: 1. When a transverse load is supplied to a FGM, a bottom section of the shell receives significantly more stress than the top portion; consequently, it really is important to build the FGM with a high Young's modulus at the bottom to avoid fracture. 2. Mostly in case of in-plane stress, the part at the top is subjected to the most in-plane stress. In the FGM, the displacement is greatest at the top particle. 3. The sigmoidal law produces excellent results for the quality and improve of stresses. Both the theoretical equations and the FEM model provided significant and acceptable results for bending stress and bending strain. 4. Power law give smooth or uniform stress distribution through the thickness and can be used for high stress applications. 5. The maximum deformation capacity that satisfies the practicability is determined to achieve the maximum bearing capacity, based on the amount of deformation and stress energy of the shell. | 3,647 | 2021-11-24T00:00:00.000 | [
"Engineering"
] |
Experimental Evidence of PID E ff ect on CIGS Photovoltaic Modules
: As well known, potential induced degradation (PID) strongly decreases the performance of photovoltaic (PV) strings made of several crystalline silicon modules in hot and wet climates. In this paper, PID tests have been performed on commercial copper indium gallium selenide (CIGS) modules to investigate if this degradation may be remarkable also for CIGS technology. The tests have been conducted inside an environmental chamber where the temperature has been set to 85 ◦ C and the relative humidity to 85%. A negative potential of 1000 V has been applied to the PV modules in di ff erent configurations. The results demonstrate that there is a degradation a ff ecting the maximum power point and the fill factor of the current-voltage ( I - V ) curves. In fact, the measurement of the I - V curves at standard test condition show that all the parameters of the PV modules are influenced. This reveals that CIGS modules su ff er PID under high negative voltage: this degradation occurs by di ff erent mechanisms, such as shunting, observed only in electroluminescence images of modules tested with negative bias. After the stress test, PID is partially recovered by applying a positive voltage of 1000 V and measuring the performance recovery of the degraded modules. The leakage currents flowing during the PID test in the chamber are measured with both positive and negative voltages; this analysis indicates a correlation between leakage current and power losses in case of negative potential.
Introduction
Thin film photovoltaic (PV) modules in copper indium gallium diselenide (CIGS) are an excellent alternative to crystalline silicon (c-Si) modules in terms of cost and efficiency. For these characteristics, they have been consistently used worldwide in the past decade. PV modules are usually series-connected in PV strings in order to increase the system voltage; in Europe, the maximum direct current (DC) voltage currently allowed by the regulations is 1500 V for safety. When a point of the DC circuit of the PV system is grounded, a high electric potential difference between the solar cells and the frame of the modules can drive a mechanism known as potential induced degradation (PID). PID occurs in both crystalline silicon and thin film PV modules and can considerably compromise the performance of a PV system, especially if this operates at a high DC voltage [1]. PID can provoke catastrophic consequences in the operation of a PV system: in some cases, CIGS modules have essentially stopped functioning after bias application [2]. Thus, it is important to understand how the PID phenomenon works and how to prevent it.
The PID effect has received considerable attention; therefore, many researches were carried out on PID degradation occurring in crystalline silicon PV modules, which represent the most installed PV technology over the world. Hacke et al. [3] reported that a sample of polycrystalline silicon (p-Si) modules inside an environmental chamber, with a temperature of 85 • C and a relative humidity of 85% under a negative voltage of 600 V, exhibited a decay of 80% in maximum power after a short time of the test. Nagel et al. [4] reported that, after maintaining p-Si modules in the field under a negative voltage of 1000 V, they showed a power loss more than 50% within 25 weeks. Furthermore, Islam et al. [5] investigated the real PID of c-Si modules installed in a power plant: they reported an on-site power degradation of 46.5% after a duration of a negative voltage stress for nearly 11 years. Some authors reported a power loss range from 10% to 90% for various c-Si modules (including mono and poly-crystalline cells) after PID tests inside chamber [6]. PID has been also tested on monocrystalline modules: Oh et al. reported that p-base mono-crystalline silicon cells lost approximately 50% of their initial power at −600 V after just 44 h [7]. Goranti et al. [8] investigated that the remaining power of mono-crystalline modules stressed at 85 • C dry heat, with negative voltage of 600 V was about 15% compared to the pre-test power [8], as the c-Si modules are affected by PID; some authors reported that coating a TiO 2 thin film on the cover glass of modules can prevent them from PID [9].
The PID in thin film modules is mostly attributed to the diffusion of Na+ ions originating from the module glasses, via different mechanisms depending on the presence or absence of moisture, and is often manifested through delamination or corrosion of the transparent conducting oxide (TCO) layer of the module. This can cause a drastic reduction of the module performance [10][11][12][13]; Fjällström et al. [14] reported that PID effect can drive the efficiency of CIGS cells to drop to about zero.
As well known, the PID effect highly depends on the environmental conditions, especially temperature and relative humidity. If a PV module is subject to high potential (with the proper sign) under these conditions, a drastic degradation of the module performance can happen: the degradation seems to be correlated with the leakage current flowing between the module frame, usually grounded, and the active parts of the solar cells; this current crosses the encapsulant material of the module. It has been reported that the leakage current flowing through the glass and the encapsulant leads to an accumulation of trapped negative charges on the active layer [15]. There are different pathways for the leakage current to flow between the frame of the module and the solar cells. The pathway from the active layer to the frame through the front glass which has a high surface conductivity is the most dominant during the PID effect [16,17]. The low resistivity is mainly due to the encapsulant material: the soda-lime glass used in thin-film PV modules can be the source of sodium ions that, located at the front glass surface, might lead to a loss of the efficiency [18][19][20]. Even though the PID mechanism is not yet well understood, the sodium ions that migrate from the glass to the TCO seem to be one of the main reasons for the degradation [14]. The migration of the sodium ions causes a cell shunting that leads to a reduction of the module efficiency together with a deterioration of the main electrical characteristics [14,21]. It has also been demonstrated that PID degraded modules can recover from their condition by applying a positive bias [22]. The PID stress is related to the materials used to fabricate the modules and can be suppressed by using a sodium-free front cover glass, Yamaguchi et al. have reported that CIGS modules can be prevented from PID by using an ionomer encapsulant [16].
The purpose of the present work is to provide initial reference results about PID effect on commercial CIGS modules by a sample from a manufacturer. Manufacturing processes are constantly evolving; thus, the future modules will be able to withstand harsher environmental conditions without manifesting remarkable PID worsening, with obvious advantages in terms of reliability and life of the PV system.
In this work, tests have been performed on some commercial CIGS modules working in an environmental chamber and connected to both a negative and a positive high voltage. The current In this work, tests have been performed on some commercial CIGS modules working in an environmental chamber and connected to both a negative and a positive high voltage. The current voltage (I-V) curves and the main electrical parameters have been collected and investigated. Electroluminescence imaging has been used in order to analyse the non-visible degradation process. The leakage current flowing between the ground and the cells' active layer has also been measured. A final "recovery" test has been performed in order to bring the PV modules back to their initial conditions. The adopted methodology and the results will be used as a reference for researchers that will compare them with new results obtained on CIGS modules from different manufacturers.
Diagnostics Techniques for PID Detection
The I-V curve measurement and the electroluminescence (EL) test are two complementary diagnostics techniques. The I-V curve measurement allows a quantitative evaluation of the PV module performance, while the EL gives a qualitative state of health, with details about the causes of the failures.
Description of Diagnostics Techniques for Defects Detection
Regarding the determination of I-V curves, the dynamic methods consist of the use of capacitive or electronic loads [23], with an adequate and calibrated measurement system [24]. A typical scheme of the measurement system with capacitive load is shown in Figure 1; voltage, current, irradiance, and ambient temperature are detected simultaneously (the irradiance sensor and the thermometer for ambient temperature are not represented). Thanks to the capacitive load [25], the scan of the entire I-V curve can be easily performed at any level (module, string and array) up to hundreds of kilowatts. The elaboration of the I-V curve points permits to obtain all the electrical parameters, in particular the actual power output and the performance of the module. The only limitation of the I-V curve characterization is that it cannot easily provide information about the causes of the parameter deviations. In order to define the causes of underperformance, electroluminescence tests are even more used, thanks to higher accuracy and much lower costs than in the past [26].
The analysis of the EL images permits to justify the performance deviation with mechanical/chemical defects. The EL test starts with a forward bias of the PV module, obtained thanks to an appropriate DC power supply (Figure 2), in a totally shaded condition (e.g., in a dark room of a laboratory or on-field with low irradiance). The PV cells work like light emitting diodes (LEDs), in which their semiconductor materials have emission spectra in the infrared (IR) region of Thanks to the capacitive load [25], the scan of the entire I-V curve can be easily performed at any level (module, string and array) up to hundreds of kilowatts. The elaboration of the I-V curve points permits to obtain all the electrical parameters, in particular the actual power output and the performance of the module. The only limitation of the I-V curve characterization is that it cannot easily provide information about the causes of the parameter deviations. In order to define the causes of underperformance, electroluminescence tests are even more used, thanks to higher accuracy and much lower costs than in the past [26].
The analysis of the EL images permits to justify the performance deviation with mechanical/chemical defects. The EL test starts with a forward bias of the PV module, obtained thanks to an appropriate DC power supply (Figure 2), in a totally shaded condition (e.g., in a dark room of a laboratory or on-field with low irradiance). The PV cells work like light emitting diodes (LEDs), in which their semiconductor materials have emission spectra in the infrared (IR) region of the Energies 2020, 13, 537 4 of 16 electromagnetic spectrum and not in the visible region [27]. The image, captured by the sensor, is post processed to permit the identification of the defects.
Energies 2020, 13, x FOR PEER REVIEW 4 of 16 the electromagnetic spectrum and not in the visible region [27]. The image, captured by the sensor, is post processed to permit the identification of the defects. In case of c-Si silicon modules, the IR emission is in the range 900-1300 nm, and the peak, corresponding to the bandgap, is at 1150 nm. This emission can be partially detected by cheap silicon sensors, i.e., silicon charge-coupled device (CCD) or CMOS. On the other hand, the use of an expensive sensitive camera equipped with indium gallium arsenide (InGaAs) photodiodes is necessary for CIGS [28]. In case of CIGS generators, the emission curve can reach a wavelength of 1400 nm and it is better matched by the absorption curve of an InGaAs detector.
Detection of PID
The PID is easily identifiable in images obtained by EL. As an example, Figure 3 shows the EL image of c-Si modules installed on a rooftop. The working cells emit infrared radiation and are bright and homogenous (an example of two well working cells is highlighted by a white circle). On the contrary, the inactive parts of cells are dark. In the module in the centre of this Figure In case of c-Si silicon modules, the IR emission is in the range 900-1300 nm, and the peak, corresponding to the bandgap, is at 1150 nm. This emission can be partially detected by cheap silicon sensors, i.e., silicon charge-coupled device (CCD) or CMOS. On the other hand, the use of an expensive sensitive camera equipped with indium gallium arsenide (InGaAs) photodiodes is necessary for CIGS [28]. In case of CIGS generators, the emission curve can reach a wavelength of 1400 nm and it is better matched by the absorption curve of an InGaAs detector.
Detection of PID
The PID is easily identifiable in images obtained by EL. As an example, Figure 3 shows the EL image of c-Si modules installed on a rooftop. The working cells emit infrared radiation and are bright and homogenous (an example of two well working cells is highlighted by a white circle). On the contrary, the inactive parts of cells are dark. In the module in the centre of this Figure 3, there are 60 cells, each one with a size of 15.6 × 15.6 cm. About 40 cells are well working, while the rest (20) are defective.
Energies 2020, 13, x FOR PEER REVIEW 4 of 16 the electromagnetic spectrum and not in the visible region [27]. The image, captured by the sensor, is post processed to permit the identification of the defects. In case of c-Si silicon modules, the IR emission is in the range 900-1300 nm, and the peak, corresponding to the bandgap, is at 1150 nm. This emission can be partially detected by cheap silicon sensors, i.e., silicon charge-coupled device (CCD) or CMOS. On the other hand, the use of an expensive sensitive camera equipped with indium gallium arsenide (InGaAs) photodiodes is necessary for CIGS [28]. In case of CIGS generators, the emission curve can reach a wavelength of 1400 nm and it is better matched by the absorption curve of an InGaAs detector.
Detection of PID
The PID is easily identifiable in images obtained by EL. As an example, Figure 3 shows the EL image of c-Si modules installed on a rooftop. The working cells emit infrared radiation and are bright and homogenous (an example of two well working cells is highlighted by a white circle). On the contrary, the inactive parts of cells are dark. In the module in the centre of this Figure In order to make a comparison between PID and other defects, the module shown in Figure 3 is affected by both PID and cracks. Actually, this figure refers to modules installed on a rooftop, and cracks appear due to the walking on the PV modules mainly during maintenance [29]. The cracked cell is highlighted by a blue circle in Figure 3; the crack is easily identifiable because the black/inactive area has well-defined edges. Another aspect which supports in the identification of PID is the distribution of inactive cells along the PV strings. As well described in the next chapter, the PID occurs when a high negative voltage is applied between the cells (n-type layer in contact with the glass) and the metallic frame. Thus, supposing a floating voltage of the PV generator (in case of transformerless DC/AC converters), the most affected modules will be the ones at the end of the string (near the negative pole of the string), where there is a negative potential with respect to the ground ( Figure 4).
Energies 2020, 13, x FOR PEER REVIEW 5 of 16 in an intermediate situation, in which they are partially working. In order to make a comparison between PID and other defects, the module shown in Figure 3 is affected by both PID and cracks. Actually, this figure refers to modules installed on a rooftop, and cracks appear due to the walking on the PV modules mainly during maintenance [29]. The cracked cell is highlighted by a blue circle in Figure 3; the crack is easily identifiable because the black/inactive area has well-defined edges.
Another aspect which supports in the identification of PID is the distribution of inactive cells along the PV strings. As well described in the next chapter, the PID occurs when a high negative voltage is applied between the cells (n-type layer in contact with the glass) and the metallic frame. Thus, supposing a floating voltage of the PV generator (in case of transformerless DC/AC converters), the most affected modules will be the ones at the end of the string (near the negative pole of the string), where there is a negative potential with respect to the ground ( Figure 4).
Experimental Setup
Degradation Figure 5 represents the sketch of the PID test in the climatic chamber. In the present work, a commercial device made by ATT Angelantoni Test technologies is used. It has a test volume of 11 m³, the temperature range is from −60 to +100 °C and the relative humidity can be controlled between 10% and 95%. The tests have been performed in three different configurations, as follows. In the first configuration, the positive terminal of the DC power supply has been connected to the front surface of the PV modules (a metallic foil is placed on the surface to equalize the electric potential), while the negative terminal with high voltage of 1000 V has been connected to the PV terminals in short circuit.
In the second one, the only difference is that the positive terminal of the DC power supply has been connected to the back surface (by the same metallic foil). In the third one, the positive terminal of the DC power supply has been connected to the frame of the PV modules. The CIGS modules under study are four. the p-type (CIGS) is rear-side and n-type (CdS) is front-side. The frames of three modules, subject to artificial PID, have been grounded and one module without bias has been used as a reference, remaining at beginning of life.
Metallic frame connected to ground
Floating poles of the PV generator
Metallic frame connected to ground
Floating poles of the PV generator Figure 5 represents the sketch of the PID test in the climatic chamber. In the present work, a commercial device made by ATT Angelantoni Test technologies is used. It has a test volume of 11 m 3 , the temperature range is from −60 to +100 • C and the relative humidity can be controlled between 10% and 95%. The tests have been performed in three different configurations, as follows. In the first configuration, the positive terminal of the DC power supply has been connected to the front surface of the PV modules (a metallic foil is placed on the surface to equalize the electric potential), while the negative terminal with high voltage of 1000 V has been connected to the PV terminals in short circuit.
Degradation
In the second one, the only difference is that the positive terminal of the DC power supply has been connected to the back surface (by the same metallic foil). In the third one, the positive terminal of the DC power supply has been connected to the frame of the PV modules. The CIGS modules under study are four. the p-type (CIGS) is rear-side and n-type (CdS) is front-side. The frames of three modules, subject to artificial PID, have been grounded and one module without bias has been used as a reference, remaining at beginning of life. Firstly, the PV modules have been placed in an environmental chamber, where the temperature has been maintained at a constant value of 85 °C, and the relative humidity has been set to 85%. After the desired conditions have been reached, a negative voltage bias (1000 V) has been applied to the shorted leads of the modules, while their frames have been grounded. After a first cycle of about 20 h, the modules have been removed from the chamber and characterized using a flash tester, recording the I-V curves as well as calculating the electrical parameters (including short circuit current Isc and Voc). The flash tests have been performed at the standard test conditions (STC): irradiance of 1000 W/m 2 , corresponding to air mass 1.5, and cell temperature of 25 °C. STC conditions have been obtained in a dark room with temperature control and an artificial light. A pulsed solar simulator (PSS 8 from Berger) has been used; it consists of a Xenon flash tube; resistive loads are used to trace the I-V curves. According to IEC 60904-9 [30], the simulated spectrum AM 1.5 G, the spatial uniformity and the temporal stability outperform A class [31]. The measurements are acquired by a data acquisition system (12 bit-resolution) with typical uncertainty <0.5% for both voltage and current and 1% for power.
Afterwards, the modules have been placed again in the chamber to continue the PID test. The flash test measurements have been frequently repeated after about 20 h until the end of the test with a total duration of 120 h.
The leakage current flowing between the module connectors and the frame contact has been continuously recorded during the PID test in order to find a correlation between the leakage current and the degradation. Moreover, to visualize the affected regions, electroluminescence images of the modules have been acquired before and after the PID test.
Recovery
After the PID stress, the modules have been biased with a positive voltage of +1000 V, in order to investigate the capability of performance recovery. The test has been performed in the climatic chamber with the same temperature and relative humidity (85 °C/85% RH), with the same combinations of bias and connections used in the previous test. A test similar to PID has been performed on light soaked modules using the same configuration as the first test, the light soaking setup has been performed before the chamber testing, using a steady-state sun simulator for 2 h. In addition to the stressed modules, other two new modules have been biased with a positive voltage of 1000 V: they are used as a reference for the recovery test to compare the effect of voltage polarization on the performance. All the PV modules have been tested for a total duration of 120 h, and removed frequently after about 20 h of stress for the measurement of the I-V curves with the flash Firstly, the PV modules have been placed in an environmental chamber, where the temperature has been maintained at a constant value of 85 • C, and the relative humidity has been set to 85%. After the desired conditions have been reached, a negative voltage bias (1000 V) has been applied to the shorted leads of the modules, while their frames have been grounded. After a first cycle of about 20 h, the modules have been removed from the chamber and characterized using a flash tester, recording the I-V curves as well as calculating the electrical parameters (including short circuit current I sc and V oc ). The flash tests have been performed at the standard test conditions (STC): irradiance of 1000 W/m 2 , corresponding to air mass 1.5, and cell temperature of 25 • C. STC conditions have been obtained in a dark room with temperature control and an artificial light. A pulsed solar simulator (PSS 8 from Berger) has been used; it consists of a Xenon flash tube; resistive loads are used to trace the I-V curves. According to IEC 60904-9 [30], the simulated spectrum AM 1.5 G, the spatial uniformity and the temporal stability outperform A class [31]. The measurements are acquired by a data acquisition system (12 bit-resolution) with typical uncertainty <0.5% for both voltage and current and 1% for power.
Afterwards, the modules have been placed again in the chamber to continue the PID test. The flash test measurements have been frequently repeated after about 20 h until the end of the test with a total duration of 120 h.
The leakage current flowing between the module connectors and the frame contact has been continuously recorded during the PID test in order to find a correlation between the leakage current and the degradation. Moreover, to visualize the affected regions, electroluminescence images of the modules have been acquired before and after the PID test.
Recovery
After the PID stress, the modules have been biased with a positive voltage of +1000 V, in order to investigate the capability of performance recovery. The test has been performed in the climatic chamber with the same temperature and relative humidity (85 • C/85% RH), with the same combinations of bias and connections used in the previous test. A test similar to PID has been performed on light soaked modules using the same configuration as the first test, the light soaking setup has been performed before the chamber testing, using a steady-state sun simulator for 2 h. In addition to the stressed modules, other two new modules have been biased with a positive voltage of 1000 V: they are used as a reference for the recovery test to compare the effect of voltage polarization on the performance. All the PV modules have been tested for a total duration of 120 h, and removed frequently after about 20 h of stress for the measurement of the I-V curves with the flash tester. Furthermore, the leakage Energies 2020, 13, 537 7 of 16 current has been logged continuously for all the modules and the EL images have been taken before and after the tests inside the chamber.
Degradation
All the modules have been characterised at STC conditions. Figure 6 shows the I-V curves of three modules tested inside the climatic chamber with both negative (front contact of the first test) and positive (front contact of the second test) bias of 1000 V, and an unbiased module used as a reference. As shown in Figure 6, both the positive biased module and the unbiased module show similar behaviour with slight degradation, mainly due to the expected effect of high temperature and relative humidity inside the environmental chamber. On the contrary, the negative bias drives the sample to degrade significantly at the end of the test compared to the other samples. These results reveal that the high negative voltage induces PID degradation, in the CIGS modules under tests, with a remarkable power loss similar to the power losses in the crystalline silicon modules from different manufacturers. It is worth noting that the PID affected modules have not shown any visible discoloration.
images have been taken before and after the tests inside the chamber.
Degradation
All the modules have been characterised at STC conditions. Figure 6 shows the I-V curves of three modules tested inside the climatic chamber with both negative (front contact of the first test) and positive (front contact of the second test) bias of 1000 V, and an unbiased module used as a reference. As shown in Figure 6, both the positive biased module and the unbiased module show similar behaviour with slight degradation, mainly due to the expected effect of high temperature and relative humidity inside the environmental chamber. On the contrary, the negative bias drives the sample to degrade significantly at the end of the test compared to the other samples. These results reveal that the high negative voltage induces PID degradation, in the CIGS modules under tests, with a remarkable power loss similar to the power losses in the crystalline silicon modules from different manufacturers. It is worth noting that the PID affected modules have not shown any visible discoloration.
The changes in electrical parameters of the modules during the tests with positive bias have been investigated: these changes result linked with the applied bias and environmental conditions inside the climatic chamber. Figure 7 shows the normalized values (with respect to the initial values) of the maximum power, Pmax, and the Fill Factor FF = Pmax/(Isc •Voc) as a function of stress time inside the climatic chamber. Here, the connections are different: positive bias (1000 V) applied to the front contact, to the back contact, to the frame, and unbiased module. The changes in electrical parameters of the modules during the tests with positive bias have been investigated: these changes result linked with the applied bias and environmental conditions inside the climatic chamber. Figure 7 shows the normalized values (with respect to the initial values) of the maximum power, P max , and the Fill Factor FF = P max /(I sc ·V oc ) as a function of stress time inside the climatic chamber. Here, the connections are different: positive bias (1000 V) applied to the front contact, to the back contact, to the frame, and unbiased module. Before the PID tests, for all the modules (four) the measured maximum powers have been very similar (Pmax,r = 77 W) and the corresponding fill factor has been FFr = 68%, where the deviations are smaller than the uncertainty of the data acquisition system (±0.5% for both voltage and current, ±1% for power). In order to easily show the deviations, normalized values are used: they are calculated as the ratio between the measured value after the test and the initial value. In the first hours inside the chamber, all the modules have exhibited a noticeable degradation of Pmax and FF. Normalized Pmax has been reduced: the back contact and unbiased modules have rapidly highlighted a similar behaviour and decreased slowly down to 0.92 and 0.93 respectively, while the front and frame contacts have been degraded to 0.95 at the end of the test. Fill factor has shown a decrease to 0.98 with a similar behaviour for the front and frame contacts, while the FF for the back contact and unbiased modules has been degraded to 0.95. These results reveal that testing under harsh enviromental conditions and high voltage can have a slight effect on the modules performance, the damp-heat conditions(85 °C/85% RH) lead to losses in the FF of CIGS cells. The decrease of the FF can be associated with an increase of series resistance, that is mainly caused by an enhancement in the resistivity of the transparent conductive oxide (TCO) layers. In [32,33], it is confirmed that this is the cause of performance losses of PV modules in damp-heat conditions.
The PV modules with front and frame contact connections have shown less degradation, because the forward bias has maintained the PV modules at a stable condition; on the contrary, the unbiased module has shown an higher performance degradation [34]. Taking into account the measurement uncertainties (typical amount of ±1% for the current and the voltage values) arising during the flash testing, the evolution of the parameters can be attributed to metastabilities changes. Figure 8 shows the time evolution of the normalized parameters of the five modules under test with negative voltage of 1000 V. The results demonstrate that the PID effect is similar in the case of the tests with the back and the front contacts. After 120 h, even if Voc and Isc have remained constant, both the maximum power and the fill factor have substantially decreased with the stress time. Really, as shown in Figure 8, the average of Pmax and FF has been decreased by 35% at the end of the test. These degradations lead to a significant reduction in the efficiency of the modules and in their capability to produce energy. In the case of light soaked modules, Pmax and FF have decreased similarly to non-light soaked modules, with some changes arising during the test because of the metastabilities occurring with light soaking presenting in slight increase in the first hours and some unexpected degradation during the test. The FF of light soaked modules has shown less degradation at the end of the PID test comparing to no light-soaked modules. Before the PID tests, for all the modules (four) the measured maximum powers have been very similar (P max,r = 77 W) and the corresponding fill factor has been FF r = 68%, where the deviations are smaller than the uncertainty of the data acquisition system (±0.5% for both voltage and current, ±1% for power). In order to easily show the deviations, normalized values are used: they are calculated as the ratio between the measured value after the test and the initial value. In the first hours inside the chamber, all the modules have exhibited a noticeable degradation of P max and FF. Normalized P max has been reduced: the back contact and unbiased modules have rapidly highlighted a similar behaviour and decreased slowly down to 0.92 and 0.93 respectively, while the front and frame contacts have been degraded to 0.95 at the end of the test. Fill factor has shown a decrease to 0.98 with a similar behaviour for the front and frame contacts, while the FF for the back contact and unbiased modules has been degraded to 0.95. These results reveal that testing under harsh enviromental conditions and high voltage can have a slight effect on the modules performance, the damp-heat conditions(85 • C/85% RH) lead to losses in the FF of CIGS cells. The decrease of the FF can be associated with an increase of series resistance, that is mainly caused by an enhancement in the resistivity of the transparent conductive oxide (TCO) layers. In [32,33], it is confirmed that this is the cause of performance losses of PV modules in damp-heat conditions.
The PV modules with front and frame contact connections have shown less degradation, because the forward bias has maintained the PV modules at a stable condition; on the contrary, the unbiased module has shown an higher performance degradation [34]. Taking into account the measurement uncertainties (typical amount of ±1% for the current and the voltage values) arising during the flash testing, the evolution of the parameters can be attributed to metastabilities changes. Figure 8 shows the time evolution of the normalized parameters of the five modules under test with negative voltage of 1000 V. The results demonstrate that the PID effect is similar in the case of the tests with the back and the front contacts. After 120 h, even if V oc and I sc have remained constant, both the maximum power and the fill factor have substantially decreased with the stress time. Really, as shown in Figure 8, the average of P max and FF has been decreased by 35% at the end of the test. These degradations lead to a significant reduction in the efficiency of the modules and in their capability to produce energy. In the case of light soaked modules, P max and FF have decreased similarly to non-light soaked modules, with some changes arising during the test because of the metastabilities occurring with light soaking presenting in slight increase in the first hours and some unexpected degradation The exposure to 120 h of stress in the chamber has caused an increase of ≈12% of Voc in the modules with both front and back contacts, respectively. On the contrary, the frame contact module has decreased only by 3%. The Isc exhibits slight degradation, it has dropped by 3% and 6% for the front and back contact tests, respectively. The degradation can be affected also by other parameters, however, the data reveal that the degradation of the modules is associated to a large decrease in the shunt resistance Rp (about 80%) and to an even more dramatic increase of the series resistance Rs (by a factor of about 7), which affect the fill factor. It is evident that all electrical parameters of the frame contact module seem to be slightly changed after the PID test.
It is worth noting that the evolution of the electrical parameters is only marginally influenced by the preliminary light soaking. Conversely, a small increase of the fill factor and of the short circuit current is observed during the first hours. This is due to the metastabilities of the PV modules in CIGS [35,36] and does not prevent the degradation of the modules because of the strong PID effect. Another important indication is that the positive bias does not influence much the performance of the modules, while the negative bias leads to an extensive degradation of their performance The EL images of the four modules before and after the PID stress are displayed in Figure 9. As seen in the previous results of Figure 8, the negative bias drives for obvious degradation of electrical The exposure to 120 h of stress in the chamber has caused an increase of ≈12% of V oc in the modules with both front and back contacts, respectively. On the contrary, the frame contact module has decreased only by 3%. The I sc exhibits slight degradation, it has dropped by 3% and 6% for the front and back contact tests, respectively. The degradation can be affected also by other parameters, however, the data reveal that the degradation of the modules is associated to a large decrease in the shunt resistance R p (about 80%) and to an even more dramatic increase of the series resistance R s (by a factor of about 7), which affect the fill factor. It is evident that all electrical parameters of the frame contact module seem to be slightly changed after the PID test.
It is worth noting that the evolution of the electrical parameters is only marginally influenced by the preliminary light soaking. Conversely, a small increase of the fill factor and of the short circuit current is observed during the first hours. This is due to the metastabilities of the PV modules in CIGS [35,36] and does not prevent the degradation of the modules because of the strong PID effect.
Another important indication is that the positive bias does not influence much the performance of the modules, while the negative bias leads to an extensive degradation of their performance The EL images of the four modules before and after the PID stress are displayed in Figure 9. As seen in the previous results of Figure 8, the negative bias drives for obvious degradation of electrical parameters, while the PV modules have not shown any visible degradation nor any corrosion in the TCO layer. The EL images remarkably prove wide darkening areas, this consideration is applied to the front contact (Figure 9a) as well as for the back contact (Figure 9b), where a huge darkening area can be observed in the middle of the module, which is corresponding to the shunt resistance degradation highlighted in Figure 8, while the test with the frame shows a small degradation around the perimeter. Obviously, there are no changes in the EL images of the fourth module (Figure 9d) that was not stressed and has been kept new as a reference module. On the other hand, in case of crystalline silicon modules affected by PID, the inactive cells belong to the modules subject to a more negative potential (close to the negative pole of the PV string) and are located along the borders near the frame.
Energies 2020, 13, x FOR PEER REVIEW 10 of 16 parameters, while the PV modules have not shown any visible degradation nor any corrosion in the TCO layer. The EL images remarkably prove wide darkening areas, this consideration is applied to the front contact (Figure 9a) as well as for the back contact (Figure 9b), where a huge darkening area can be observed in the middle of the module, which is corresponding to the shunt resistance degradation highlighted in Figure 8, while the test with the frame shows a small degradation around the perimeter. Obviously, there are no changes in the EL images of the fourth module (Figure 9d) that was not stressed and has been kept new as a reference module. On the other hand, in case of crystalline silicon modules affected by PID, the inactive cells belong to the modules subject to a more negative potential (close to the negative pole of the PV string) and are located along the borders near the frame.
(a) (b) (c) (d) Figure 9. EL images of the tested modules at 85 °C and 85% RH, before (upper images) and after (lower images) the test for different contacts: (a) front, (b) back, (c) frame, and (d) unbiased.
The dark regions shown in the EL images, which are due to the shunting mechanisms occurring in the middle or near the edge of the modules because of the contact used to apply the voltage for the PID test. Therefore, for both front and back contacts, the current is concentrating in the middle of modules, whereas for frame contact the current is concentrating at the edge and affects less the module. This witnesses the extensive degradation of the modules together with the big changes observed for the electrical parameters.
The shunting mechanisms are mainly associated with the migration of the sodium ions (Na+) from the cover glass toward the active part of the solar cells and the CdS layer. In addition, the sodium ions are also located in the CIGS/Mo interface or in the back of modules and can also follow this path to migrate and accumulate in the active cell. The sodium ions move because of the applied voltage, and this happens more when the bias is negative [7,11]. This movement can be associated with a redistribution of charges which leads to an enhancement of the recombination effect in the depletion Figure 9. EL images of the tested modules at 85 • C and 85% RH, before (upper images) and after (lower images) the test for different contacts: (a) front, (b) back, (c) frame, and (d) unbiased.
The dark regions shown in the EL images, which are due to the shunting mechanisms occurring in the middle or near the edge of the modules because of the contact used to apply the voltage for the PID test. Therefore, for both front and back contacts, the current is concentrating in the middle of modules, whereas for frame contact the current is concentrating at the edge and affects less the module. This witnesses the extensive degradation of the modules together with the big changes observed for the electrical parameters.
The shunting mechanisms are mainly associated with the migration of the sodium ions (Na + ) from the cover glass toward the active part of the solar cells and the CdS layer. In addition, the sodium Energies 2020, 13, 537 11 of 16 ions are also located in the CIGS/Mo interface or in the back of modules and can also follow this path to migrate and accumulate in the active cell. The sodium ions move because of the applied voltage, and this happens more when the bias is negative [7,11]. This movement can be associated with a redistribution of charges which leads to an enhancement of the recombination effect in the depletion zone. This phenomenon, which might be the cause of the shunt resistance drop, certainly reduces the open circuit voltage and the short circuit current with a consequent loss of power.
On the other hand, the dark areas shown in the EL images are also related to the obvious increase of the series resistance shown in Figure 8, the luminescence emission is variable due to the sheet resistance of the TCO and the Mo back layer [37]. The dark areas represent parts of a module where there is a poor or even a zero current flow. The increased value of the series resistance is the cause of the degradation of the fill factor.
Recovery
As shown in the previous section, the critical environmental conditions together with the application of a high negative bias lead to a significant degradation of the main parameters of the modules and to a consistent efficiency loss. As the negative bias has been recognized to be the main cause of the degradation occurred during the test, a second test has been performed in order to show how an opposite bias could contribute to recover the modules. Table 1 shows the changes in the electrical parameters of the degraded modules (front, back, and frame contacts) after the first PID test. The PV modules have been placed in the environmental chamber at a temperature of 85 • C and a relative humidity of 85%; a positive voltage of 1000 V has been applied for 120 h. All the electrical parameters have improved after the test showing that applying a positive voltage gives the capability to recover partially the performance of the degraded modules. In particular, the effect of the recovery test is particularly more beneficial in the case of the bias applied to the front contact. For this configuration and for the back connection, at the end of the test the short circuit current and the open circuit voltage have recovered to a high degree. Conversely, the maximum power and the fill factor are both only the 75% of the initial values. The case of frame connection does not exhibit an improvement of performance after recovery. Figure 10 shows the EL images of the modules after the recovery test. The dark regions have become more luminescent confirming that the shunting mechanisms, occurred in the modules during the PID test, have been partially recovered. This is because the positive bias reverses the migration paths of the sodium ions which are attracted and accumulated in the active cells after the PID test.
Leakage Current Analysis
A third analysis has been carried out on the leakage current flowing during the PID test (voltage 1000 V, temperature 85 °C, relative humidity 85%) which increases the surface conductivity of the modules. The leakage currents known as the transferred charges from the active layer to the frame, are plotted in Figure 11; these leakage currents are related to the PID effect. The highest current has been observed for the front contact configuration and corresponds to high degradation as displayed in Figure 8. Furthermore, the back contact module shows less flow: as both front and back connections cause significant degradations, this reveals that the leakage current through the back contact is most harmful than the front contact, and the power loss needs less transferred charges [38]. The frame contact module shows the smallest leakage current which is related to a small degradation in the first test. These results could give a correlation between power loss and the leakage current flowing under high voltage during PID test. When the relative humidity is high, the wet surfaces of the modules become electrically conductive and thus a consistent leakage current can flow from the active solar cells to the grounded frame through the front and the back contact surfaces. This flow is facilitated by the low resistivity of the contacts which depends on the encapsulation material used in modules: as well known, the use
Leakage Current Analysis
A third analysis has been carried out on the leakage current flowing during the PID test (voltage 1000 V, temperature 85 • C, relative humidity 85%) which increases the surface conductivity of the modules. The leakage currents known as the transferred charges from the active layer to the frame, are plotted in Figure 11; these leakage currents are related to the PID effect. The highest current has been observed for the front contact configuration and corresponds to high degradation as displayed in Figure 8. Furthermore, the back contact module shows less flow: as both front and back connections cause significant degradations, this reveals that the leakage current through the back contact is most harmful than the front contact, and the power loss needs less transferred charges [38]. The frame contact module shows the smallest leakage current which is related to a small degradation in the first test. These results could give a correlation between power loss and the leakage current flowing under high voltage during PID test.
Leakage Current Analysis
A third analysis has been carried out on the leakage current flowing during the PID test (voltage 1000 V, temperature 85 °C, relative humidity 85%) which increases the surface conductivity of the modules. The leakage currents known as the transferred charges from the active layer to the frame, are plotted in Figure 11; these leakage currents are related to the PID effect. The highest current has been observed for the front contact configuration and corresponds to high degradation as displayed in Figure 8. Furthermore, the back contact module shows less flow: as both front and back connections cause significant degradations, this reveals that the leakage current through the back contact is most harmful than the front contact, and the power loss needs less transferred charges [38]. The frame contact module shows the smallest leakage current which is related to a small degradation in the first test. These results could give a correlation between power loss and the leakage current flowing under high voltage during PID test. When the relative humidity is high, the wet surfaces of the modules become electrically conductive and thus a consistent leakage current can flow from the active solar cells to the grounded frame through the front and the back contact surfaces. This flow is facilitated by the low resistivity of the contacts which depends on the encapsulation material used in modules: as well known, the use When the relative humidity is high, the wet surfaces of the modules become electrically conductive and thus a consistent leakage current can flow from the active solar cells to the grounded frame through the front and the back contact surfaces. This flow is facilitated by the low resistivity of the contacts which depends on the encapsulation material used in modules: as well known, the use of soda-lime glass in PV modules' technologies leads that sodium ions located at the front glass surface decrease their resistivity, and drive to significant leakage current to flow through this path. It is considered as a dominant path of leakage current, when a high voltage is applied. On the other hand, the back surface has more resistivity compared to the front glass explained by the small amount of Na. This can reveal for less leakage current to flow as obtained after measurements.
The obtained results after the PID test can be correlated with the magnitude of leakage current generated by bias in damp-heat conditions, and flowing between the ground and the active cells: it reveals that the more the PID degradation occurs on PV modules, the more leakage current to flow in the modules. This correlation can be considered as an indicator of PID effect on PV modules. Figure 12 depicts the results of leakage current flowing in the three modules after applying the positive bias. However, the analysis of leakage current during recovery process shows that similar magnitude of current to flow in the opposite direction, as during PID stress, where the leakage current flowing through the front side has decreased more than a half at the end of the test.
Energies 2020, 13, x FOR PEER REVIEW 13 of 16 of soda-lime glass in PV modules' technologies leads that sodium ions located at the front glass surface decrease their resistivity, and drive to significant leakage current to flow through this path. It is considered as a dominant path of leakage current, when a high voltage is applied. On the other hand, the back surface has more resistivity compared to the front glass explained by the small amount of Na. This can reveal for less leakage current to flow as obtained after measurements. The obtained results after the PID test can be correlated with the magnitude of leakage current generated by bias in damp-heat conditions, and flowing between the ground and the active cells: it reveals that the more the PID degradation occurs on PV modules, the more leakage current to flow in the modules. This correlation can be considered as an indicator of PID effect on PV modules. Figure 12 depicts the results of leakage current flowing in the three modules after applying the positive bias. However, the analysis of leakage current during recovery process shows that similar magnitude of current to flow in the opposite direction, as during PID stress, where the leakage current flowing through the front side has decreased more than a half at the end of the test. Summarizing, a correlation exists, in case of negative voltage, between the leakage current and the power loss, but a general relationship between the leakage current and the performance degradation cannot be defined.
Conclusions
In this work, PID accelerated tests have been performed on a group of commercial CIGS modules inside a damp-heat climatic chamber, comparing the effects due to PID with traditional crystalline technologies. The degradation behaviour of modules at the end of the test has been investigated and the modules have suffered from PID only under negative bias. The affected modules have exhibited performance degradation, mainly the maximum power and fill factor as well as other electrical parameters. However, the degraded modules showed no visible defects, while EL images have proved the degradation process at the cell level: a remarkable decrease in shunt resistances and an increase in series resistances have been detected. It is generally confirmed that the tested modules show to degrade highly by applying the high voltage on the back or front surface. The main cause of this mechanism can be associated with the migration and accumulation of Na ions on the active solar cells of the affected modules. For both negative bias and positive bias, a high level of leakage current flows between module surfaces and the active solar cells by applying high voltage. The measured leakage current can be suggested as a rate of the modules susceptibility to PID stress. Positive voltage has no PID effect on the tested modules, which can reveal that a correlation between the measured leakage current in case of negative bias and the power degradation can be confirmed. Applying the Summarizing, a correlation exists, in case of negative voltage, between the leakage current and the power loss, but a general relationship between the leakage current and the performance degradation cannot be defined.
Conclusions
In this work, PID accelerated tests have been performed on a group of commercial CIGS modules inside a damp-heat climatic chamber, comparing the effects due to PID with traditional crystalline technologies. The degradation behaviour of modules at the end of the test has been investigated and the modules have suffered from PID only under negative bias. The affected modules have exhibited performance degradation, mainly the maximum power and fill factor as well as other electrical parameters. However, the degraded modules showed no visible defects, while EL images have proved the degradation process at the cell level: a remarkable decrease in shunt resistances and an increase in series resistances have been detected. It is generally confirmed that the tested modules show to degrade highly by applying the high voltage on the back or front surface. The main cause of this mechanism can be associated with the migration and accumulation of Na ions on the active solar cells of the affected modules. For both negative bias and positive bias, a high level of leakage current flows between module surfaces and the active solar cells by applying high voltage. The measured leakage current can be suggested as a rate of the modules susceptibility to PID stress. Positive voltage has no PID effect on the tested modules, which can reveal that a correlation between the measured leakage current in case of negative bias and the power degradation can be confirmed. Applying the bias to the front side has been driven to partially recover the degraded performance of PID affected modules.
Future work will extend the measurement campaign to the modules of other manufacturers. Measurements will be performed on new modules in laboratory and field environments, to better quantify the performance variation over longer durations, and as a function of weather conditions. This work will permit us to quantify the new progress performed by the manufacturers developing the new typology of materials and coatings to reduce the flow of leakage currents, especially in the front glass and in the backsheet of the PV modules. | 13,084.4 | 2020-01-22T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Enantio and Diastereoselective Addition of Phenylacetylene to Racemic α-chloroketones
In this report, we have presented the first diastereoselective addition of phenylacetylene to chiral racemic chloroketones. The addition is controlled by the reactivity of the chloroketones that allowed the stereoselective reaction to be performed at –20 °C. Chiral racemic chloroketones are used in the reaction. By carefully controlling the temperature and the reaction time we were able to isolate the corresponding products in moderate yields and with good, simple and predictable facial stereoselection. Our reaction is a rare example of the use of chiral ketones in an enantioselective alkynylation reaction and opens new perspectives for the formation of chiral quaternary stereocenters.
Introduction
The addition of carbon nucleophiles to reactive electrophilic functions, such as C=O and C=N double bonds, is a process of fundamental importance in the development of chemical synthesis [1,2]. Among the various nucleophilic species available, alkynes are excellent reagents for mild and selective C-C bond forming reactions [3][4][5]. In 2005, we found that mixtures of Me 2 Zn and acetylenes are able to promote the room temperature alkynylation of aldehydes, ketones and imines to furnish propargylic alcohols in good to excellent yields [6]. We have also developed an enantioselective addition of OPEN ACCESS phenylacetylene to ketones based on this concept [7]. Since our report, the enantioselective alkynylation of ketones using R 2 Zn as deprotonating agent in the presence of chiral ligands has been the subject of a number of interesting studies [8][9][10][11][12][13]. However, the diastereoselective and enantioselective addition of acetylides has never been investigated in the case of ketones. Recently, organocatalytic reactions have made possible the simple preparation of optically active α-chloroor α-bromoketones, useful starting materials for the preparation of densely functionalized building blocks [14]. However, these useful starting material are difficult to isolate and the subsequent reaction needs to be performed in situ. On the other hand, racemic α-haloketones are inexpensive and readily accessible reagents. If the addition of a nucleophile to a racemic haloketone is realized in the presence of a chiral catalyst in a stereoselective manner, highly densely functionalized building blocks containing a quaternary stereocenter could be prepared (Scheme 1).
Scheme 1.
Addition of a phenylacetylene to racemic chloroketones.
Herein, we report a successful realization of this concept using the Zn(Salen) promoted addition of acetylene to racemic α-chloroketones.
Results and Discussion
During our studies in the addition of phenylacetylene promoted by Me 2 Zn performed in the absence of air we found that α-chloroketones were particularly reactive substrates, and the reaction of 3-chloro-2-butanone with phenylacetylene in the presence of Me 2 Zn furnished the corresponding alcohol quantitatively, albeit as a mixture of diastereomers in 52:48 ratio (Scheme 1). In order to improve the diastereoselection we have investigated the reaction in the presence of different ligands 5-9. We have found that a good dr was obtained when the reaction was performed in the presence of the racemic Salen ligand 8. On the other hand, other Schiff bases were also able to increase the dr of the reaction.
The possibility to use α-chloroketones as substrates for the addition of nucleophiles takes advantage of their enhanced reactivity, and this opens new possibility for stereoselective addition of nucleophiles in organic synthesis [15]. As the reactivity of the chloroketones, compared to other ketones, is remarkable (even in the presence of the Salen ligands), we decided to investigate the reaction performing the addition of phenylacetylene in the presence of enantiopure Salen ligands. We have disclosed the first addition of phenylacetylene to ketones controlled by the Salen ligand [7], and other ligands able to promote the addition of alkyne to ketones were introduced by other groups [8][9][10][11][12][13]. The addition of phenylacetylene to silylketones promoted by Salen ligands was reported by Chan and Lu [16]. In all these reports it is clearly shown that the reactivity of the ketones is quite different compared to aldehydes and in all the systems reported, long reaction times and a high catalyst loading are required for good stereoselection. The simple stereoselection of the reaction was assigned as anti by chemical correlation with the diastereoisomeric epoxides 10 (Scheme 2). phenylacetylene (3 equiv.) were added to toluene in a flask under strictly anhydrous conditions, and stirred 10 min, then the ligand (20 mol%) was added and the reaction mixture and stirred for 5-10 min. Finally, the chloroketone (1 equiv.) was added. The reaction was stirred under completion (monitored by TLC, 16-24 h) and quenched with water. The dr was determined on the crude reaction mixture by 1 H-NMR. b The reaction was performed in the presence of Me 2 Zn without ligands. c The reaction was performed with lithium phenylacetylide, prepared by the addition of 1 equiv. of n-BuLi to 1.1 equiv. of phenylacetylene at 0 °C.
The mixture of the diasteroisomers was transformed to the epoxides by treatment with tBuOK in THF at -20 °C. The 13 C-and 1 H-NMR data of the mixture of isolated epoxides were compared with literature data [17]. It is worth adding that the reaction of the lithium acetylide to the chloroketone furnished the desired product with high stereoselectivity ( stereoselectivity (Table 1, entry 1), the the Zn(Salen) formed in situ controls the stereoselection through the complexation of the chloroketone to the zinc Lewis acidic center. The simple anti diastereoselection obtained in the case lithium phenylacetylene and with ligand mediated addition of zinc phenylacetylide can be rationalized by a Felkin-Anh transition state in which the nucleophilic alkyne attaches the carbonyl group in a conformation in which the chlorine is the larger group (Scheme 2, Figure A).
Scheme 2.
Assignment of the relative configuration of the diastereoisomers obtained by addition of phenylacetylene to a racemic chloroketone.
The preliminary results obtained with 1a gave us the indication that the reactivity of the chloroketone was very high, and that it was possible to reduce the temperature for the stereoselective reaction. It is noteworthy that the Salen catalyzed addition of phenylacetylene to ketone is performed at rt and no addition occurs with aliphatic or aromatic ketones at lower temperatures (0 °C or below). The increased reactivity of chloroketone thus allowed the investigation of the reaction at reduced temperature. Using commercially available racemic 3-chlorobutanone (1a) as a model substrate, we have performed the reaction at low temperature, in the presence of the (R,R)-Salen 8 and in Table 2 we report the results of this investigation. The reaction is stereoselective and the corresponding adducts can be isolated in high enantiomeric and good diastereoisomeric excesses, albeit in low to moderate yield. The enantiomeric excess obtained in the reaction is a function of the conversion. In fact, in order to keep the stereoselection very high it is important to stop the reaction after 60 hours at -20 °C; if the reaction is conducted at 0 °C, the adduct is isolated with a dr of 4:1 in favor of the anti diastereoisomer in good yield (60-70%) but in very low ee. If the reaction is not stopped at -20 °C after 60 h, the conversion is increased and the yield can be higher than 50%, but the facial stereoselection of the isolated diastereoisomers was quite low. Performing the reaction at increased temperature (>0 °C) the syn and anti stereoisomers are again isolated with good yield but in very low enantiomeric excess. This is straightforwardly explained by considering the reactivity of the chloroketone and the background reaction. At room temperature the Me 2 Zn promoted addition of phenylacetylene to ketone is occurring without the catalysis of the Zn(Salen) complexes [6]. The conversion and the isolated yield of the products can be improved at the expense of the enantiomeric excess. In order to increase the reaction tBuOK tBuOK rate of the reaction the excess of Me 2 Zn, and phenylacetylene was adjusted, and a compromise between reactivity and stereoselectivity was finally obtained. Similar reaction with aliphatic or aromatic ketones do not give any traces of product if they are conducted at −20 °C, even in the presence of the Salen ligand. The best conditions in the optimization were found when 4.5 equivalent of Me 2 Zn and 30 mol % of Salen were employed in the reaction. The Salen and the Me 2 Zn were mixed at rt for 1 hour before being to add the chloroketone at low temperature. The quantity of solvent was also important in order to enhance the enantiomeric excess. The reaction was performed without stirring, at −20 °C for 60 hours. Delighted by the results obtained with 3-chlorobutanone, we decided to investigate the generality of our reaction, studying other different chloroketones. The substrates were prepared using a methodology developed by De Kimpe [23], illustrated in Scheme 3. The corresponding ketones were prepared without difficulties in large scale, and were purified by distillation. The ketones 1b-f was used in the reaction with phenylacetylene promoted by Me 2 Zn, by using the conditions optimized for the substrate 1a and the results are reported in Table 3. Table 3. Stereoselective addition of phenylacetylene to a series of α-chloroketones promoted by (R,R)-Salen ligand. Generally, the yields of the reaction are quite moderate, because the reaction was stopped in order to obtain the highest enantiomeric excesses for the isolated adducts. The steric hindrance of the ketones does not seem significant in the controlling the enantiomeric excess. The reaction also works in the case of other haloketones, e.g. α.-fluoroketones, Scheme 4 Although was possible to perform the reaction with other haloketones, generally the results were inferior compared to the chloroketones. Concerning the possibility of using different substituted acetylenes in our reactions, we have briefly investigated the reaction employing alkyl and silyl substituted acetylenes under the general conditions developed for 3-chlorobutanone, and the data obtained are reported in Table 4. As is possible to evince by the data, the reactivity of the acetylenes 2b-d was quite lower compared to phenylacetylene, and products were isolated in low yield and with minor levels of stereoselectivity. We have established the absolute configuration of the products as indicated in Scheme 5. Enantiopure 4-phenyl-3-chloro-butan-2-one was prepared from the corresponding (S)-3-phenyl-2chloropropanal following the procedure published by De Kimpe [24]. The (S)-chloroaldehyde 13 was obtained in moderate yield from the aminoacid 12, and used without purification in the subsequent steps. (S)-chloroaldehyde 11 was treated with MeMgBr at 0 °C, to give the corresponding secondary alcohol that was directly oxidized with PCC to the ketone 1b. HPLC analysis performed on the chloroketone established that the (S)-chloroketone was obtained with poor enantiomeric excess of 20%, due to the racemization. Optically active chloroaldehydes have been used in the diastereoselective addition of organometallic reagents [25][26][27][28][29]. The addition of Grignard reagents to chloroaldehyde was reported to give good yield and moderate diastereoisomeric excess both in Et 2 O or in THF, and racemization of the aldehydes seems not to occur [30]. Perhaps, the racemization of our substrate is occurring during the oxidation step. Nevertheless, the enantiomeric excess obtained for the chloroketone 1b was sufficient to assign the absolute configuration of the products obtained in the alkynylation reactions. The reaction with the chloroketone 1b was performed in the presence of 20 mol% of a racemic mixture of (R,R) and (S,S)-Salen ligand (Scheme 5). AS in the case of the ketone 1a, the diastereoisomeric ratio of products 3ba and 4ba obtained was 4:1 in favor of the anti diastereoisomer. The configuration of the stereogenic centers for all the diastereoisomers was assigned by comparison of the HPLC traces obtained for the racemic and for the (S)-chloroketone. The absolute configuration of the major diastereoisomer obtained in the reaction with (R,R)-Salen and racemic 1b was 3S,4R. The absolute configuration for all the products was assigned by analogy, taking in consideration that for all the chloroketones the anti and syn products have similar HPLC retention times. Is worth adding that the absolute configuration obtained in the case of the chloroketone is opposite to that in the Salen mediated addition of phenylacetylene to aliphatic and aromatic ketones.
Entry a Ketone dr b T (h) ee anti
In the case of aliphatic and aromatic ketones the (R,R)-Salen is inducing the formation of a new stereogenic tertiary alcohol of (S) configuration [31]. However, the reaction conditions for the alkynylation reactions in the case of the chloroketones are completely different from those of the alkynylations of aromatic and aliphatic ketones. In addition, the presence of the stereogenic center of the chiral chloroketone can induce a preferential coordination of one enantiomer of the chloroketone as depicted in the model of Figure 1. In order to study the possibility of kinetic resolution, two enantiomeric chloroketones 1b were separated by chiral HPLC analysis. When the reaction was performed using 2 or more equivalents of chloroketone 1b, the excess of chloroketone was isolated after the reaction performed in the presence of (R,R)-Salen ligand and it was analyzed by chiral HPLC. The ketone 1b was isolated with no traces of enantioenrichment. In addition, when the optically active (S)-1b (20% ee) was treated with 2 equivalent of Me 2 Zn in toluene and the solution obtained was stirred at rt for two days, after quenching no reaction of Me 2 Zn with the choroketones took place. According to the HPLC analysis of the crude reaction mixture the (S) chloroketone undergoes no racemization in the presence of Me 2 Zn, as the enantiomeric excess of the chloroketones was unchanged. Therefore, the result obtained in the reaction of racemic chloroketones is not determined by a racemization of the chloroketones and the selective reaction of one enantiomer. The observed facial stereoselection could result from a preferential coordination of one enantiomer of the chloroketone to the Zn(Salen) complex, with this preferential stereoisomer reacting at a faster rate. It is worth adding that the analysis of the reaction is complicated by the fast background reaction, which has hampered any attempt to prove that the reaction was taking place via a kinetic resolution with preferential coordination of the (R) chloroketones. The fast background reaction is favored by excess of ketones, an excess that is necessary to investigate the kinetic resolution. For example, when the reaction was performed using 3 equiv of 1a in the presence 1 equiv of phenylacetylene and 1.4 equiv. of Me 2 Zn, the corresponding adducts were isolated with 15% ee for the anti stereoisomer. Performing careful analysis and selecting different reaction times and concentrations, we were not able to measure any enrichment of the starting chloroketones 1b. However, further studies are still necessary in order to explain and understand the results obtained in the stereoselective addition of phenylacetylene to haloketones, and work is in progress towards this objective. Phenyacetylene was purchased by Aldrich and used as received. 3-Chlorobutanone is commercially available and was used after distillation. The chloroketones 1b-1f were prepared according to the literature procedure described by De Kimpe [23], and were obtained in 16-20% yield (three reactions).
The fluoroketone 1g was obtained as reported in literature [32].
Addition of Alkynes to Chloroketones
General procedure: In a flask under nitrogen containing a solution of (R,R)Salen (0.075 mg, 0.135 mmol) in toluene (1 mL), a 2 M solution of Me 2 Zn is added under stirring (1 mL, 2.025 mmol). The mixture is stirred 10 min at rt, then phenylacetylene (0.15 mL, 1.35 mmol) was added. The mixture was stirred 1 h at rt, then the solution was cooled to -25 °C. Chloroketone (0.45 mmol) was added, and the mixture was kept at -20 °C without stirring for 60 h. The reaction was quenched with water at -20 °C, then the reaction was diluted with Et 2 O. The organic phase (yellow) was separated and the aqueous phase was extracted with Et 2 O. The organic phases were reunited, evaporated under reduce pressure and purified by chromatography. Chloro-3-methyl-1-phenylpent-1-yn-3-ol (3aa/4aa)
Conclusions
In conclusion, we have presented the first diastereoselective and enantioselective addition of phenylacetylene to chiral racemic chloroketones. The reaction provides access to highly functionalized products and the adducts were obtained in low yield and good stereoselection. The reactivity of chloroketones in the presence of chiral zinc catalyst can be explored with other zinc reagents taking advantage of the enhanced reactivity. Formation of quaternary and tertiary stereogenic centers from racemic chloroketones will be the subject of other studies from our laboratory. | 3,771.6 | 2011-06-01T00:00:00.000 | [
"Chemistry"
] |
Ontology-Based Probabilistic Estimation for Assessing Semantic Similarity of Land Use/Land Cover Classification Systems
To accurately and formally represent the historical trajectory and present the current situation of land use/land cover (LULC), numerous types of classification standards for LULC have been developed by different nations, institutes, organizations, etc.; however, these land cover classification systems and legends generate polysemy and ambiguity in integration and sharing. The approaches for dealing with semantic heterogeneity have been developed in terms of semantic similarity. Generally speaking, these approaches lack domain ontologies, which might be a significant barrier to implementing these approaches in terms of semantic similarity assessment. In this paper, we propose an ontological approach to assess the similarity of the domain of LULC classification systems and standards. We develop domain ontologies to explicitly define the descriptions and codes of different LULC classification systems and standards as semantic information, and formally organize this semantic information as rules for logical reasoning. Then, we utilize a Bayes algorithm to create a conditional probabilistic model for computing the semantic similarity of terms in two separate LULC land cover classification systems. The experiment shows that semantic similarity can be effectively measured by integrating a probabilistic model based on the content of ontology.
Introduction
Mapping land cover (LULC) provides important support for representing the historical trajectory and present situation of earth observation [1,2], land management [3], pattern analysis [4], settlement monitoring [5], landscape planning [6], etc. These LULC classification maps are available at multiple spatial and temporal scales generated by numerous types of classification standards for LULC. Currently, tens of LULC classification systems have been developed by different nations, institutes, and organizations, such as the NLCD1992 and the NLCD2006 developed by USGS (U.S. Geological Survey), the C-CAP developed by NOAA (National Oceanic and Atmospheric Administration), the Land Cover classification systems, legends developed by the UN (United Nations), and Chinese Current Land Use Classification.
These land cover classification systems and legends generate two significant challenges in integration and sharing: (1) polysemy: a land parcel might be defined as different LULC types by various LULC classification systems; (2) ambiguity: the same term of LULC might be defined differently according to various LULC classification systems. Polysemy and ambiguity belong to semantic heterogeneity [7], which focuses on addressing the confusion of expression in natural language processing. Li and Ling divided the semantic heterogeneity in terms of LULC classification systems and standards into three major factors [8]. (1) Confounding conflicts: the same definition or concept represents Land 2021, 10, 920 2 of 14 diverse meanings. For example, the notion "commercial/industrial" belongs to the category "Commercial/Industrial" "Transportation" in NLCD1992 but belongs to the category "Developed High Intensity" in NLCD2006. (2) Scaling and unit conflicts: the same definition is represented at different scales and units. For example, the term "Low Density" in NLCD1992 and NLCD2006 is defined differently. (3) Naming conflicts: one word has multiple meanings, or one meaning can be expressed by using multiple words. For example, the "perennial" of NLCD 2006 and "long-term" of NLCD 1992 represent the same meaning.
To address the semantic heterogeneities, a number of works have proposed approaches regarding semantic harmonization to integrate multi-source information and features into a consistent one. Since the psychological study shows that similar features can attract more attention than different ones [9], semantic harmonization mainly focuses on semantic similarity to deal with semantic heterogeneity. Some previous works have used metadata to define the characteristics of the relationship of LULC types; however, the work proposed by Comber, Fisher, and Wadsworth [10] claimed that the metadata could not explicitly describe the meaning of LULC information. To deal with this challenge, a number of semantic harmonization regarding LULC focuses on statistical learning-based semantic similarity assessment, such as conceptual spaces [11], semantic metrics [12], integrating post-classification and semantic metrics [13], regression integrated correlation matrix [14], etc. Moreover, the user-machine interactive approach [15] and expert-enhanced system [16] have been developed to facilitate understanding the semantics for assessing semantic similarity.
Assessing the semantic similarity of various LULC terms requires the consideration of the explicit meanings of domain knowledge and the hidden expressions/relationships between a term and its neighboring terms. Thus, the domain ontologies could be a significant barrier to the implementation of those approaches in terms of semantic similarity assessment. For example, although the statistical model performs well on measuring the similarity of "high intensity" between high-intensity residential (NLCD 2006) and highintensity developed (NLCD 1992), it cannot measure the relevance between developed and residential. The ontology can semantically define and formally represent the domain knowledge based on a hierarchical taxonomy, including classes, instances, attributes, and relationships. For semantic similarity measuring, previous works claimed that an ontology could systematically organize the domain knowledge and explicitly discover the relevance and correlations among domain individuals [17,18].
Until now, the state-of-the-art ontology-based semantic similarity assessment for language recognition and knowledge modeling consists of edge-based similarity measuring, feature-based similarity measuring, information content-based similarity measuring, and gloss-based similarity measuring [17,[19][20][21]. Edge-based approaches are simple and easy to compute, but they cannot satisfy the demand for precision and accuracy of semantic similarity measures. Moreover, although the IC-based approaches successfully handle many applications regarding semantic similarity measures, informativeness or content are difficult to obtain from the limited volume information of LULC classification systems and standards. When the features are inadequate, feature-based approaches cannot accurately distinguish the small difference. The implementation of the gloss-based similarity method requires massive text information stored in a word base such as WordNet and Wiktionary; however, to our knowledge, the word base is still unreported in terms of LULC land cover classification and mapping. Thus, gloss-based similarity measuring might not be appropriate for measuring the semantic similarity of LULC classification systems and standards.
To accurately assess the semantics similarity of LULC classification systems with a limited amount of text information, we propose an ontology-enhanced probabilistic approach to enhance the semantic similarity measuring regarding the domain of LULC classification systems and standards. The remainder of this paper is organized as follows: Section 2 discusses the works relevant to ontology-based semantic similarity assessment; Section 3 presents our proposed methods for measuring semantic similarity, which includes an on-tology named LuLcSys-Ontology for a formal representation of LULC, and a probabilistic model for semantic similarity based on LuLcSys-Ontology; Section 4 shows our semantic similarity assessment by using other approaches and our proposed one; Section 5 concludes our work, details our contributions to the literature, and predicts several prospective relevant research fields.
Edge-Based Similarity Measuring
Edge-based similarity measuring aims to calculate the links or depth between the terms in a conceptual hierarchy. The approach to compute the link and depth of a path is shown as follows: link = min(len(path(a, b))) depth(a) = min(len(path(a, r))) where path(a, b) are the set that includes all paths between two separate terms a and b, len(path(a, b)) is the set that includes the length of each path between a and b. r is the root of a hierarchical taxonomy that includes both a and b.
Other extensive works on edge-based similarity measuring include the approaches proposed by Li, Bandar, and McLean [22] and Al-Mubaid and Nguyen [23]. The edge-based similarity measure is straightforward and requires low-cost computing; however, it might be ineffective to deal with the semantic similarity assessment for a hierarchical taxonomy with a complex structure. Additionally, the path and depth of a term vary according to different ontologies, which means that this term might be measured as different. Finally, it cannot represent the hidden information in ontologies.
Information Content (IC)-Based Similarity Measuring
The IC-based similarity measuring assesses the semantic similarity based on the informativeness of the concept [24]. Assuming a concept as a, p(a) is the probability of observing this concept, the informativeness of this concept (IC(a)) is shown as follows: Resnik [24] and the following methods designed an approach to measure the semantic similarity between two concepts based on the informativeness, which is shown as follows: where a and b are two independent concepts, Sub (a, b) denotes the set of all concepts that contains concepts a and b. Depending on Equation (3), the subsequent studies on IC-based similarity measures include two focuses [19]: Corpora-based IC computation method and intrinsic IC computation method. The corpora-based IC computation method computes the content of IC by using external information. Otherwise, the intrinsic IC computation method focuses on utilizing the knowledge included in ontology is more popular. Related applications include measuring IC from a conceptual hierarchy with optimized depth calculation [25], measuring IC from a conceptual hierarchy without depth calculation [26], and measuring IC from a conceptual hierarchy via a setting weights mechanism [27,28].
In general, an IC-based similarity measure relies on massive well-prepared data to discover the heterogeneous meanings of each term. In comparison to the volume of training data from semantic bases such as WordNet, the number of terms in the stateof-the-art LULC classification systems and standards is inadequate for generating an accurate measuring result. Moreover, although intrinsic IC computation methods can derive knowledge from ontology without the support of massive external information, the hierarchical taxonomy in an ontology might be very complex for this method.
Feature-Based Similarity Measuring
Feature-based similarity measuring focuses on the similarity between the properties of two concepts, which is based on the set theory proposed by Tversky [29]: where d(a) and d(b) are the descriptions for concept a and b, respectively, µ is the weight, d(a)/d(b) denotes that the descriptions belong to a but not b, and d(b)/d(a) denotes that the descriptions belong to b but not a.
Since the hierarchical taxonomies in an ontology have been becoming more and more complex, the investigation on semantic similarity has concentrated on the similarity of features rather than of terms [30]. Rodriguez and Egenhofer [31] proposed a feature-based semantic similarity with regard to the relationships between terms.
where A and B are the corresponding set of terms a and b, respectively. sim s (), sim f (), and sim n () are the synsets, features, and neighbor concepts, and µ s , µ f , and µ n are the weights for these three concepts, respectively. More details of computing sim s (), sim f (), and sim n () can be found in Reference [31]. Other feature-based similarity measures include X-similarity [32], integrating information-theoretical domain [33], using taxonomical features [34], measuring similarity without pre-defined ontology [35], matching concepts from diverse ontologies [36], etc. Appropriate weighting refers to the most significant limitation of the feature-based similarity measure. In general, a feature-based similarity measure might assign an appropriate weight for each feature by a trial-and-error procedure. Moreover, a feature-based similarity measure assigns a weight for each independent term; however, the terms in various LULC classification systems and standards might have overlapped features, making it difficult to determine an appropriate weight.
LuLcSys-Ontology
Based on Protege software [37], we developed a domain ontology named LuLcSys-Ontology to semantically define and formally organize the information extracted from LULC classification systems and standards. Figure 1 illustrates the conceptual model of LuLcSys-Ontology, which includes five components: Classes, Instances, Properties, and Restrictions. Instances includes the individuals that belong to a class item defined in Classes. The items in Properties refer to relationships, and the items in Restrictions refer to the precondition and context of relationships. More details are provided as follows. The details of LuLcSys-Ontology are shown in Table 1.
Component Triple Relationship Content
Classes "subject" or "object" in the triple Three subclasses: Categories: the categories or classes of LULC classification systems and standards.
Codes: the codes corresponding to categories or classes.
Features: the characteristics of each category or class.
Instances
"subject" or "object" in the triple The terms or notations derived from the textural descriptions of LULC classification systems and standards.
Properties "predicate" in the triple Three types of properties: the details are shown in Table 2.
Restrictions "predicate" in the triple Restrictions define the validity of a property under specific conditions. Function terms "predicate" in the triple The characteristics of properties.
Moreover, each item in Instances should belong to at least one class in Classes. In LuLcSys-Ontology, properties are defined by the W3C Standards, including RDFS (Resource Description Framework Schema) and OWL (Ontology Web Language), and predefined by LuLcSys-Ontology. Since the Annotation property mainly represents the metainformation of ontology, we focus on data property-based triple (subject-data propertyobject), and the object property-based triple (subject-object property-object). In some cases, the data property-based triple might be incorporated into an object property-based triple. Table 2 lists the details of Properties, Restrictions, and Function terms. The properties that start with lulcsys:, rdf:, and owl:, show that this property is defined by LuLcSys-Ontology, RDFS, and OWL, respectively. The items of Restrictions and Function terms are defined by the W3C Semantic Web Standard. Based on the W3C Semantic Web Standard [38,39], all relationships in LuLcSys-Ontology were created as a triple relationship: "subject-predicate-object". Taking three categories of NCLD 2006 (Deciduous Forest, Evergreen Forest, and Mixed Forest) as the example, Figure 2 shows the transformation from the descriptions of these three categories into the semantic information of LuLcSys-Ontology. Figure 2A shows the descriptions of three categories involving Deciduous Forest, Evergreen Forest, and Mixed Forest. Figure 2B shows the semantics explicitly defined in LuLcSys-Ontology. We label various components in different colors. The orange texts refer to Classes, the italic black texts are Properties, the red texts are Property restrictions, the green texts are instances that are defined by Object properties, and the blue texts are the instances that are defined by Data properties. Based on these components, all descriptions are organized as triple relationships-as shown in Figure 2B. Moreover, Figure 3 shows the partial structure of the LuLcSys-Ontology developed for NLCD 1992, including three classes of NLCD_1992: Categories, Codes, and Features. The yellow rectangles refer to the subclasses of these three classes, and the purple rectangles refer to the instances. All properties are represented by arrow lines. When an arrow line connects two rectangles, the rectangle that connects to the starting point of the arrow line refers to the "object" in the triple relationship, and the other rectangle refers to the "subject" in the triple relationship.
Rules Building
In comparison to a spatial database, the key advantage of an ontology is the capability of discovering hidden knowledge through rule-based reasoning supported by triple relationships. In this paper, we built reasoning rules by SWRL (semantic web rules language) [39], which is defined by the W3C Semantic Web Standard. Assume the triple relationship (subject-predicate-object) in ontology as P(Sub, Obj), where Sub, P() and Obj denotes subject, property, and object, respectively. Additionally, Sub new , P new (), and Obj new respectively, denotes the new subject, property, and object after reasoning based on P(Sub, Obj). The basic structure of SWRL in this paper is as follows: Then, based on the data properties and object properties, we develop two types of rules: the rule of data property-based triple, and the rule of object property-based triple.
Assuming object property and data property as oP() and dP(), based on Equation (6), we have the rule based on object property-based triples and data property-based triples as follows: [oP 1 (?s 1 , ?o 11 ) ∩ dP 1 (?s 1 , ?o 12 where i is the total number of object property-based triples. oP new (?x new , ?y new ) denotes a new object property-based triple. According to Equation (6), this new triple is also the result of logical reasoning. We present an example of the reasoning based on Deciduous Forest in Figure 2 and in Table 3. Assuming we have a tree called "target_tree"; then, we have two data propertybased triples and two object-property-based triples: Table 3. Examples of semantic modeling for LULC classification systems and standards.
High-Intensity Residential Class in NLCD 1992 Developed High-Intensity Class in NLCD 2011/2006/2011
Constructed materials account for 80 to100 percent of the cover. Impervious surfaces account for 80% to 100% of the total cover.
Pr(S 2 |O 2 )
The probability of observing the coverage of impervious surfaces.
Pr(S 1 |(D 1 |O 1 )) The probability that the coverage is no less than 80%, when the coverage of constructed materials is observed.
The probability that the coverage is no less than 80%, when the coverage of impervious surfaces is observed.
Based on these three triples, we can deduce a hidden relationship being unsupported by a spatial database: target_tree rdf:isInstancceOf Deciduous Forest.
Probabilistic Reasoning Embedded Ontology-Based Semantic Similarity Measuring
As mentioned previously, feature-based measuring is limited to accurately weighting each feature without massive training samples. Thus, semantically modeling the features, rather than quantitatively weighting, would be an alternative solution. We integrate the probabilistic model (Bayes) and the feature-based measuring method to assess semantic similarity. Based on the object property-based triples and data property-based triples in LuLcSys-Ontology, we create the Bayes-based conditional probabilities to assess the semantic similarity.
For separate terms (subjects) S 1 and S 2 in two LULC classification systems and standards, we assume that the object property-based triple and data property-based triple of S 1 are P(S 1 , O 1 ) and P(S 1 , D 1 ), respectively. Similarly, for S 2 , we assume its object property-based triple and data property-based triple as P(S 2 , O 2 ) and P(S 2 , D 2 ). Moreover, the common features of objects and data between S 1 and S 2 are O c and D c , O c ⊆ O 1 ∩ O 2 and D c ⊆ D 1 ∩ D 2 . The semantic similarity of S 1 and S 2 (sim(S 1 , S 2 )) is measured by the following expression: In Equation (8), we transform the semantic similarity of S 1 and S 2 to the probability of observing that they are similar, which is denoted as Pr(S 1 , S 2 ). The similarity is measured based on their common features of object (O c ) and common features of data (D c ), which is represented by Pr(S c ). Pr(S c ) is obtained by the following expression.
In Equation (9), Pr(S c |O c ) refers to the probability of observing S 1 and S 2 are similar based on O c . Pr(S c |(D c |O c )) is the probability of observing S 1 and S 2 are similar based on D c . The following table shows an example that explains the parameters in Equation (9).
Experiments
The datasets for the experiment include three major regional LULC classification systems and standards: NLCD1992 and NLCD 2011/2006/2011 from USGS, and NOAA Regional Land Cover Classification Scheme from NOAA. The first experiment assesses the semantic similarity between NLCD 1992 and NLCD 2011/2006/2011. Considering that the difference between NLCD 2011/2006/2011 and NOAA Regional Land Cover Classification Scheme has attracted much attention, the second experiment focuses on assessing the semantic similarity of these two land cover classification systems and legends. The classes of these land cover classification systems and legends are listed in Table 4.
According to the categories and descriptions of NLCD 1992, NLCD 2011/2006/2011, and NOAA Regional Land Cover Classification Scheme, we develop three separate LuLcSys-Ontologies for these land cover classification systems and legends: NLCD92_Ontology for NLCD 1992, NLCD11_Ontology for NLCD 2011/2006/2011, and NOAA_Ontology for NOAA Regional Land Cover Classification Scheme. Then, we compute the semantic similarity based on the triples of each two ontologies: NLCD92_Ontology and NLCD11_Ontology, and NLCD11_Ontology and NOAA_Ontology. The computing method includes three existing ontology-based approaches: edge-based measures [23], feature-based measures [26], information content-based measures [25], and our proposed approach. Table 5 shows the result of the semantic similarity assessment between NLCD 1992 and NLCD 2011/2006/2011. By comparing the textural descriptions of these two LULC classification systems and standards, both polysemy and ambiguity can be observed. In other words, no two classes are exactly the same, although they are defined as the same term. Based on the path and depth of each two terms in ontologies, PDBM cannot effectively assess the semantic similarities between most of the classes in NLCD 1992 and NLCD 2001/2006/2011. Meanwhile, we can observe that information content-based measures (ICBM) cannot assess the semantic similarities of some classes in these two LULC classification systems and standards. When there exists a limited volume of common features between two classes, the informativeness of their seminaries is challenging to assess; however, ICBM performs well on distinguishing some small differences between the two classes. For example, although the four classes of NLCD 1992 involving Row Crops, Small Grains, Fallow, and Orchards/Vineyards/Other are similar to the class of NLCD 2001/2006/2011 named Cultivated Crops, the similarities between each of these four classes of NLCD 1992 and Cultivated Crops are different. ICBM can produce more accurate results than feature-based measures (FBM) in measuring this semantic similarity. Moreover, many results by FBM are closer to the results of our proposed approach; however, FBM struggles to assess the small differences between two classes. For example, the semantic similarity of Grasslands/Herbaceous and Sedge/Herbaceous is not the same as the semantic similarity of Grasslands/Herbaceous and Lichens and Moss, because Lichens and Moss are specifically defined for the landscape of Alaska; however, FBM produces the same similarity result. Thus, without the support of a conditional probabilistic model, ICBM and FBM are limited in measuring the semantic similarity of LULC classification systems and standards based on ontology. Table 6 shows the result of the semantic similarity assessment between NLCD 2001/2006/ 2011 and NOAA Regional Land Cover Classification Scheme. The results include both polysemy and ambiguity. PDBM cannot effectively assess the semantic similarities for a majority of classes between NLCD 2001/2006/2011 and NOAA Regional Land Cover Classification Scheme. Without a manual interpretation, ICBM seems to have challenges to deal with measuring the semantic similarities of some classes (e.g., Barren Land (Rock/Sand/Clay) and Barren Land) between these two LULC classification systems and standards. Moreover, although FRM overperforms ICBM, it still cannot recognize the hidden differences. For example, the semantic similarity assessment of Palustrine Emergent Wetland (Persistent) and Emergent Herbaceous Wetlands, and Estuarine Emergent Wetland (Persistent) and Emergent Herbaceous Wetlands requires discovering the hidden relationship among Palustrine, Estuarine, and Emergent; however, this hidden relationship might not be explicit without the domain knowledge semantically organized by the conceptual hierarchy of the ontology. As we can see from Tables 5 and 6, using previous ontology-based semantic similarity for LULC classification systems and standards, the performance of existing approaches is ranked as: FBM > ICBM > PBM; however, the weaknesses of each approach prevent them from producing an accurate result of semantic similarity. By incorporating probabilistic models into FBM, our proposed approach can more accurately measure semantic similarity.
The result of semantic similarity measuring could be useful for a number of applications. First, the changes of LULC have been a significant research focus of remote sensing and land planning. Due to the fact that LULC maps within different periods were generated by various LULC classification systems, the changes of LULC based on those maps might not be available. The similarity degrees among these LULC classification systems can facilitate people quantitatively analyzing the changes of LULC in a more accurate way. Moreover, LULC classification systems are generated based on specific LULC conditions of different areas, countries, or regions. The semantic similarity of LULC classification systems of different places represents the characteristics of these places in terms of LULC to some extent.
Conclusions
The emergence of multi-type LULC classification systems and standards facilitates the generation of LULC classification maps and digital products; however, the heterogeneities of diverse LULC classification systems and standards impact the efficiency of using these products in land monitoring, management, and utilization. To address the heterogeneities, ontology-based approaches have been commonly exploited by information science. This paper integrates probabilistic models and ontologies to facilitate measuring semantic similarity of different LULC classification systems and standards.
In this paper, we developed domain ontologies to explicitly define the descriptions and code of different LULC classification systems and standards as semantic information and rules for logic reasoning. Based on the semantics and rules, we applied the Bayes algorithm to create a conditional probabilistic model for computing the semantic similarity of LULC categories in separate LULC classification systems and standards. The experiment shows that semantic similarity can be effectively measured by integrating a probabilistic model based on the content of ontology.
There are several possible extensions of this research that focus on integrating the content of different LULC classification systems and standards. To explicitly represent the hidden semantic information, the fusion of various domain ontologies for LULC classification systems and standards still needs to be investigated. Moreover, since the nature of LULC information inherits geographical context, geo-referenced information would be an aspect of the semantic similarity measuring. Based on discussions of the feature-based approach and the IC-based approach, it might be useful to study integrating informativeness and features to assess the semantic similarity of LULC classification systems and standards. | 5,786.2 | 2021-08-31T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Geography"
] |
Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures
Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. The MOHA algorithm is implemented as a freely available R script (see supplementary information).
Introduction
Tumor heterogeneity manifests itself in multiple ways in terms of observable features including tissue physiology, morphology, and histology, genotypes, gene expression, and protein expression [1,2,3,4,5]. The heterogeneity of these features can be studied at the inter-individual level [6,7] and at the intratumor level [8,9]. The inter-individual studies have typically relied on cell averaged, bulk tumor tissue measures. However, a full system-level characterization of tumor tissue heterogeneity is challenging and requires measures at the single cell level of a tissue.
Approaches to measure intratumor heterogeneity at the genomic level include computing allele fractions of the detected mutations from bulk tissue samples [10,11,12,13] or sequencing single cells [14,15]. A compromise between bulk tumor and single cell analysis is the isolation of smaller cell subpopulations by collecting samples from multiple tumor tissue regions or separating different types of cells into discrete tumor subsets by fluorescence-activated cell sorting [16,17]. The shortcoming of these approaches is that the in vivo cell spatial orientations, cellcell interactions, and cell spatial heterogeneity remain unknown.
Digital pathology offers cell level details of molecular characteristics together with their spatial distribution. Multiplexed immunofluorescence (MxIF) tissue imaging can now measure the spatial concentration distribution of 60+ proteins on the same tissue [18,19,20,21]. The idea that both the cell types and their spatial distributions are biologically relevant is not contested, yet the methods to jointly characterize the heterogeneity of these in tissues are limited and still being established [22,23,24,25]. Efforts have been made toward spatially mapping the tumor microenvironment and the location of the immune cells relative to the tumor [26,27,28,29]. A proliferative heterogeneity analysis involved hexagonal tiling on whole-slide digital images of breast tumor tissues was conducted to characterize the spatial distribution of Ki67 expression [30,31]. The computed entropy metric was found to be an independent prognostic indicator of overall survival in breast cancer patients. In another study, heterogeneity was assessed in a tissue microarray constructed by sampling multiple foci of breast carcinomas. The heterogeneity of the immunomarker expression was computed by comparing within subject variances to the overall variance for the biomarkers. Intratumor heterogeneity was confirmed for five of the seven markers, while the authors raise the issue of the problematic extrapolation of these findings from small biopsy specimens to the entire tumor [32]. Zhong and colleagues designed a high-throughput image-based computational workflow to quantitate and visualize FISH-based copy number alterations in spatial context [33]. Although it provides an intuitive visual map of spatial heterogeneity of genomic level alterations, this approach is restricted to evaluating one or two genes at a time and does not consider tissue morphology.
To advance the field of tumor heterogeneity characterization, we have developed the MOHA tool and method. The flow of information in the MOHA method is illustrated in Fig 1. The method combines single cell molecular measures from a tissue with pre-existing knowledge of biological pathways to assign states to cells in the tissue. It then incorporates positional measures of the cells to compute spatial state distributions. Tissue heterogeneity and diversity metrics are then computed from the observed distributions of these molecular and spatially defined states. Finally, these diversity metrics of the tissue are analyzed to gain biological insights. To demonstrate our MOHA method, we use MxIF imaging of tumor tissues from a colorectal cancer cohort. We show how the computed MOHA heterogeneity metrics correlate with cancer stage, histological tumor grade, and cancer recurrence.
Colorectal cancer cohort dataset
Tissue samples from colorectal cancer (CRC) patients were collected at the Clearview Cancer Institute of Huntsville, Alabama, and provided to GE Global Research by Clarient Inc. The deidentified samples were acquired per institutional guidelines. This tissue microarray imaging cohort consisted of 747 paraffin-embedded patient tumor core samples distributed across three slides. These samples underwent multiplexed immunofluorescence microscopy and the results and experimental details have been reported previously [21]. Upon processing the tissue imaging data, quality filtering steps reduced the number of CRC cohort subjects (i.e. tumor core samples) from 747 down to 692. Clinical information was provided for each subject, including the histological tumor grade, cancer stage, gender, age, chemotherapy treatment (yes/no), and follow-up monitoring of 10 years (medium follow-up of 4.1 years across patients). Tables with breakdown of samples by histological tumor grade, cancer stage, and cancer reoccurrence events during follow-up can be found in section 1 of S1 File.
MxIF tissue imaging data workflow
A detailed description of the multiplexed microscopy technique as well as the single-cell analysis and visualization methodology of biological features can be found elsewhere [21]. The minimum input data required by the MOHA algorithm is a plain tab-separated text file with one line per cell, specifying the following parameters: spatial (x, y) coordinates of the cell centroid, cell area, and the cell's biomarker ordinal values. The workflow steps required to obtain the MOHA input data from MxIF tissue imaging data are as follows.
Step 1) Segment cell objects and generate biomarker measures from tissue images. MxIF imaging data typically comprises multiple tissue samples that have been imaged at 20x magnification. One image captures an entire tumor core sample from a tissue microarray. Each image undergoes quality filtering followed by cell segmentation to generate biomarker measures (mean and median value) for each cell and sub-cellular location (cytosol, nuclear, membrane). DAPI staining is used to define the nuclear area. The plasma membrane is segmented using a combination of staining patterns corresponding to the membrane proteins Na +/K+-ATPase and pan-cadherin. Cells are assigned x and y coordinates of centroid location. A cell type label is computed for each cell to be within or outside a computed epithelial region mask. The epithelial region mask is generated from the staining pattern produced by pan-cytokeratin and/or E-cadherin antibodies. Only cells located within the epithelial region mask were used for this study.
Step 2) Filter segmented cell objects that do not meet morphological quality criteria. Segmenting a million cells from images can produce some artifacts. To prevent these artifact objects from being included in the analysis, morphological quality filters are applied. The quality filters that were applied in the CRC study required cell objects to have one or two nuclei, a minimum (1.4 um 2 ) and maximum (140 um 2 ) area for both the cell nuclear and cytosol area. Cell objects that were on the edge of each image (~2 microns) were removed from the analysis. Images of tumor samples with less than 100 cells that fulfilled all filtering criteria were removed from further analysis.
Step 3) Filter biomarker measures that do not meet quality staining round metrics. A biomarker measure for a cell was removed if the cell's quality round metric was below 0.8. The tissues underwent multiple rounds of staining, bleaching, and imaging which can lead to the deterioration of the tissue or other imaging artifacts. The quality round metric ranges from unity (perfect quality) to zero (total loss), and it is derived by computing the correlation of the DAPI stain intensity for the segmented cell portion of the image at a given round of staining to the baseline DAPI staining.
Step 4) Convert biomarker measures into ordinal values based on a n-state threshold model. The immunofluorescent intensity values for each biomarker (i.e. channel or staining round) integrated within each segmented cell and sub-cellular location (e.g. whole cell, cytosol, nuclear, membrane) were converted into ordinal values. For the CRC dataset, we selected a three-state threshold model, consisting of two threshold values established to bin the biomarker intensities into high, medium, and low states. The two threshold values were defined as the 33rd and 67th percentile of the sorted immunofluorescent intensities across the entire study. More biologically relevant threshold values could be established with control samples included in the multiplexed datasets to define normal and pathologically low or high values. In the absence of such controls, we split the data into comparable size bins. Threshold values were defined for each biomarker and for each sub-cellular location.
Molecular and spatial states of cells
We selected the cells of the tissue as the atomic unit to compute diversity metrics on. Multiple diversity metrics were computed, as detailed in the next two sections. If the metric only contains information on the proportions of cells in different molecular states, then it is designated as molecular entropy or molecular heterogeneity. If additional information is incorporated about the spatial distribution of the cell states relative to each other, the metric is then designated as spatial entropy or spatial heterogeneity. The spatial metrics can again be of multiple types, depending on how the spatial state information is defined.
Cartoon examples of molecular and spatial states (i.e. species) are presented in Fig 2. In this conceptional view, cells can express three unique molecular states. The cell family metric is defined by the number of surrounding cells expressing the same molecular state as the one examined (thick black border). Both cells evaluated under Cell Family have a group size or spatial state of two (i.e. 2 dimers). The cell neighbor metric characterizes the diversity in the number of neighbors of different molecular states that surround the cell examined. The central cell illustrated for the Cell Neighbor in Fig 2 has three associated cell neighbor states: a monomer, a dimer, and a trimer. The cell social metric captures the diversity in the sizes of cell social groups. The example of 19 cells presents three unique cell social spatial states: 6 monomers, 4 dimers, and 1 pentamer.
Molecular state diversity metrics
Selecting a pathway or gene set to define a cell's molecular state. The selection of pathways or gene sets is limited by the number and type of immunofluorescence measurements available in the study. Using the nomenclature that pathways are networks represented by nodes and edges, the number of "measurable nodes" in a pathway reflect how well the available biomarker data will represent it. Well-designed imaging studies typically select biomarkers representing key driver genes (i.e. pathway nodes) of the biological process or disease under study. The AKT signaling pathway map centered on Protein Kinase B (also known as AKT) with links to cell apoptosis, cell cycle, protein synthesis, and cancer processes was selected to demonstrate the MOHA methodology. This pathway is known to be relevant for cancer and many of its nodes were quantified in the dataset presented here. Any other relevant pathway or gene set with multiple measured nodes could be used. Gene sets representing the hallmarks of cancer [34,35] were also selected. The AKT pathway and cancer hallmark gene sets used in this study are described in section 1 in S1 File (Figure A, Tables A and B).
Computing a molecular state for each cell using a pathway or gene set. The state value of an entire pathway or gene set was defined as a concatenation of the state values for each individual measurable node in the pathway assembled in a specific order. Connectivity information provided in pathway maps are not directly used for computing the diversity metrics, a current limitation of the MOHA tool. Therefore, the specific order of the genes in the pathway state concatenation sequence is arbitrary. However, once a sequence order has been chosen, it must be maintained consistently throughout the study. For example, the version of the AKT pathway we used for the colon cancer dataset had 16 measurable nodes (Figure A in S1 File). Some of these measurable nodes represented a phosphorylated state of the protein (e.g. SER-9 of GSK3B) and required specific antibodies to detect and quantify them. Other measurable nodes were proteins restricted to specific subcellular compartments. For example, there were two nodes in the AKT pathway for the protein CTNNB1, restricted to either the cytosol or nuclear subcellular compartment. These two nodes are represented using the immunofluorescence measurements that were integrated within their respective subcellular regions of the cell. For a three-state threshold model, each measurable node can have a high, medium, or low state encoded with a 0, 1, or 2 ordinal value. Therefore, a possible state for the AKT pathway is 2122202222211222 and would be assigned to those cells with biomarker measures (i.e. ordinal state values) representing this 16-measurable node concatenated sequence.
Molecular entropy and heterogeneity diversity metrics. The Shannon diversity index, also called Shannon entropy, was used to characterize the diversity of the various molecular and spatial state distributions [36,37]. There are alternative mathematical formulations of diversity with some modifying the sensitivity of the computed diversity value for rare or abundant states. Without any rational reason or biological observation to select one vs. another, we decided to use the original Shannon index. In the context of molecular diversity, the Shannon diversity index is a measure of how evenly the cells of the tissues are distributed among the possible molecular states that those cells exhibit. The entropy is maximized when all possible states are observed with the same frequency and is minimized when all cells are in the same molecular state. The molecular entropy is calculated as: The Pmi is the fraction of cells in molecular state i, and Nm is the number of possible molecular states in the system. The maximum number of molecular states is defined by the maximum number of pathway states. This was computed based on the number of measurable nodes in the pathway and the number of node levels defined by the n-state threshold model. For our version of the AKT pathway with 16 measurable nodes, each having three levels, the maximum number of possible pathway states was 3^16, a little over 43 million. When the number of cells in the sample examined is smaller than the number of possible pathway states, the former is used as the maximum number of possible molecular pathway states.
There is no theoretical upper bound for the entropy value. For the sake of comparability of samples, it is sometimes convenient to use the normalized metric of heterogeneity, defined as the entropy divided by the natural log of the number of possible states. Heterogeneity values range from zero to unity.
Molecular Heterogeneity ¼ Molecular Entropy lnðNmÞ
Molecular disparity metric. The molecular entropy and heterogeneity metrics describe the molecular complexity of the system. Each molecular state is treated as a distinct specie. When defining a molecular pathway state based on the individual states of the measurable nodes of the pathway, there are some pathway states that are more similar than others. If two pathway states differ only in a single measureable node level, the molecular distance between the two pathway states is small. However, if every node has a different value, the distance between the two pathway states will be larger. Borrowing from the concepts of complexity and disparity in multi agent systems [38,39], we define a molecular disparity metric for a sample with the maximum number of molecular states Nm as: Pm i Pm j dði; jÞ 2 Pmi and Pmj are the fractions of cells in molecular states i and j, and d(i,j) is the molecular distance between states i and j. This distance is computed as the sum of differences across the measurable nodes: where Mn,i and Mn,j are the values assigned to pathway node n in pathway states i and j, and Npn is the number of measurable pathway nodes.
Spatial state diversity metrics
Defining cell neighbors for spatial metrics. Identifying neighboring cells is necessary for computing the spatial diversity metrics. We have used two different approaches to achieve this: an exact pixel-based method and a faster approximate method. Using the segmented tissue images, it is possible to represent the edge of each cell by a set of pixel points. When deciding if two cells are spatially first neighbors (i.e. touching cells), the edge pixel points from the two cells can be compared seeking for the condition in which the distance between an edge pixel point from one cell is within one-pixel distance of an edge pixel point from another cell. This comparison of pixel points is considered the exact method. Alternatively, an approximate method was implemented to be computationally twice as fast and not require repeated image processing. The cells were approximated by circles, and the distance between their centers had to be smaller than a critical parameter multiplied by the sum of their radii. This was defined as: where the Euclidean distance between the centers of two cells (xi, yi) and (xj, yj) is computed and normalized by the sum of the approximate radii of the two cells (ri and rj). The cell radii were computed from the segmented area of the cells (Ai, Aj), approximating the cells on the 2D images as circles. If this normalized Euclidean distance is equal to or less than the dimensionless critical parameter, d critical , the cells i and j are the classified as touching neighbors.
To establish the value of the dimensionless critical parameter, d critical , the approximate method was compared to the exact method for over 752 million unique cell pairs from the colorectal cancer data set. The change in the number of correctly and falsely identified touching cell neighbors as a function of the critical parameter was computed. A critical parameter of 1.31 minimized the number of false predictions, resulting in the best agreement between the approximate and exact methods. This resulted in the approximate method having a positive predictive value of 0.884 and the negative predictive value of 0.997. This means that when the approximate method indicated that two cells are touching, there was an 88.4% probability that the two cells in the image were touching each other. The spatial diversity metrics for the AKT pathway were calculated using both the exact and the approximate method. The diversity metrics computed by the approximate method correlated with those computed by the exact method with correlation coefficients that ranged from 0.98 to 0.998. Plots of these correlations are presented in section 2 in S1 File.
Cell coordination number diversity metric. The coordination number of a cell represents the number of cells surrounding and touching it (i.e. neighbors), as defined in the previous section. In a regular two-dimensional grid arrangement (i.e. lattice) of cells, the coordination number for each cell is the same, except for those at the edge of the lattice. This is not the case for a biological tissue where the coordination numbers differ from one cell to the other. A tissue will have a characteristic frequency distribution of cell coordination numbers. An entropy metric for the cell coordination number distribution can be computed using the Shannon diversity index. The cell coordination number entropy metric did not include any molecular state information, and can therefore be considered a pure spatial diversity metric. Alternatively, the molecular states of the cells and their immediate neighbors can be used to define various diversity metrics that include molecular information in addition to spatial context. Three spatial diversity metrics are presented below: Cell Family, Cell Neighbor, and Cell Social. The entropy values for these three-spatial metrics were computed using the Shannon diversity index. The difference between them comes from the definition of the individual spatial states and the maximum number of possible states.
Cell family diversity metric. The cell family state metric was computed by surveying the neighbors of each cell and counting only the number of neighbors in the same molecular state. This number of neighbors represents the cell family state. Having no neighbors in the same molecular state is a valid cell family state. Therefore, the cell family state can range from zero to the maximum number of neighbors a cell has. After going through every cell and their touching neighbors, a frequency distribution was established for these cell family states. The cell family entropy was then computed as: Ps k lnðPs k Þ where, Psk is the frequency of state k, and Zmax is the maximum number of cell family states. For this diversity metric, Zmax equals the maximum number of neighbors a cell might have in the tissue image, which is the same as the maximum coordination number. The cell family heterogeneity was computed by dividing its entropy by the natural log of Zmax + 1.
Cell neighbor diversity metric. The cell neighbor spatial metric characterizes the diversity in the molecular states of a cell's neighborhood. Whereas the cell family metric gives rise to a single state for each cell (number of same molecular state neighbors), the cell neighbor metric defines as many states around each cell as the number of different molecular states that are present in its neighborhood. For example, in Fig 2, Cell Neighbor has a central cell surrounded by cells in three different states, resulting in three cell neighbor states of 1, 2 and 3 shown as one circle, two squares and three triangles.
Cell social diversity metric. The cell social spatial metric characterizes the diversity in the sizes of cell social groups. Each grou;p is composed of cells that express the same molecular state and are spatially linked. The social group size is the number of cells in the group. Each cell in the group must touch at least one other cell in the group. The group of cells may be spread out or clumped together (Fig 2). After assigning each cell to a social group by the molecular and spatial constraints just described, the cell social frequency distribution can be computed. As before, the Shannon index was used to compute the entropy from the frequency distribution. The cell social heterogeneity was obtained by dividing the cell social entropy by the natural log of the maximum number of states.
The maximum number of cell social states, Ns, that is theoretically possible is dependent upon the total number of cells, Nc, present in the system. Each cell social state is a group of cells of a unique number. Summing over all possible cell social states will compute the minimum number of cells required to observe all those states. This is mathematically represented as: Solving the inequality leads to the formula for computing the maximum number of possible cell social spatial states, Ns, for a system with Nc cells: Random sampling method to decouple cell molecular states from cell locations. Knowing that cells communicate with each-other, it is reasonable to expect the spatial distribution of the molecular states among the cells of a tissue to be non-random. The spatial diversity metrics reflect this deviation from randomness. We applied two methods to assess the interaction between the cell molecular states and their relative spatial orientations. The first method was a random sampling method and the second was a probability based method. The random sampling method required generating randomized arrangements of the cell molecular states among the cell locations, followed by computing the spatial diversity metrics. This process was repeated 120 times for each sample to generate a distribution for each spatial metric.
Probability based method to compute cell family diversity metric. An alternative, probability based method was employed that computed an estimate of the mean of the spatial diversity metric upon randomizing the arrangements of the cell molecular states among the cell locations. The method computed all possible configurations based on the molecular distribution (Pm) for each cell and its respective cell coordination number. For a cell family group size of k, the number of configurations Ns k is computed as: Pm i is the fractions of cells in molecular states i and Zj is the coordination number of cell j that is being evaluated. The frequency of occurrence, Ps k , for cell family state k, is obtained after normalization: With the cell family state frequency distribution, Ps k , defined, the cell family diversity is then simply computed from cell family entropy equation shown above.
The key parameters for computing the molecular and spatial diversity metrics is summarized in Table 1.
MOHA diversity metrics capture tissue cell molecular states and their spatial arrangements
We first computed the diversity metrics for each tumor core sample from the CRC dataset and then compared these computed metrics with the tissue images. Four tissue image examples are presented in Fig 3 (labeled A-D) along with a plot of their molecular and cell family heterogeneity metric values in context of the entire CRC cohort. Although the molecular and spatial diversity values showed a significant correlation, there were samples, such as B and C, that displayed rather different spatial diversity despite their almost identical molecular heterogeneity and vice-versa (A and B, or C and D). There was a general trend of increasing molecular heterogeneity and decreasing cell family heterogeneity with higher cancer stage.
Similar trends were observed between the molecular and the cell neighbor and cell social spatial diversity metrics (Fig 4C and 4D). Both the cell neighbor and the cell social heterogeneity inversely correlated with the molecular heterogeneity, but their values covered a smaller range than the cell family metric. The molecular disparity correlated highly with the molecular heterogeneity (Fig 4A), indicating that the distances between the pathway states of the cells had a similar distribution across the samples examined, with increasing disparity and complexity as tumor grade increases.
The high correlation between the molecular and spatial diversity metrics indicates that the molecular states of the cells are the major source of diversity. To probe how much additional information the topology of the tissue can add to the spatial diversity metric, two decoupling approaches were used. First, the spatial arrangement of the cells was randomized, while keeping the molecular state profiles the same. The average cell family versus the molecular heterogeneity values for these "decoupled" synthetic cases with random cell arrangements are shown with grey cross symbols in Fig 4B. The difference between the random and real values is an Characterizing the heterogeneity of tumor tissues indication that the arrangement of cells in real tissues (tissue topology) relative to their molecular states is indeed not random. For the second approach, we took the molecular pathways state distributions of the four samples shown in Fig 3A-3D), and computed the cell family heterogeneity metrics for model tissues with mean coordination numbers ranging from 1 to 8 using a probability based method described in detail in the Methods section. The results shown in Fig 4E reveal that the molecular state distribution taken from four different samples significantly influenced the absolute value of cell family heterogeneity metric. The cell coordination numbers, which reflect the topology of the tissue, had a smaller influence. Fig 4E illustrates that increasing coordination numbers result in higher cell family heterogeneity values.
Diversity metrics correlate with cancer stage and tumor grade
To assess if our diversity metrics have captured relevant biology, we performed a correlation analysis between the diversity metrics and the clinical measures of cancer stage and histological tumor grade. Utilizing the CRC cohort dataset, we computed the Spearman's rank correlation between the diversity metrics and the subject's cancer stage or tumor grade measures. A highlight of the results is presented in Table 2 and Fig 5 for the AKT pathway diversity metrics with cancer stage and tumor grade for 670 subjects. Refer to worksheet A and B in S2 File and section 3 in S1 File for a complete set of correlation results and plots that include all the cancer Characterizing the heterogeneity of tumor tissues hallmark gene sets. We observed strong and statistically significant correlations (pvalues < 1E-5) between the molecular and spatial diversity metrics for both cancer stage and tumor grade. Overall, the correlations were stronger for cancer stage than for tumor grade. As Characterizing the heterogeneity of tumor tissues noted before, the molecular heterogeneity was found to increase with cancer stage and tumor grade, while the spatial heterogeneity metrics showed the opposite trend (Fig 4B-4D). This was found to be the case for each of the cancer hallmark gene sets (see sections 4 and 5 in S1 File).
Diversity metrics correlate with cancer recurrence
The Spearman's rank correlation between several diversity metrics and cancer recurrence for the CRC cohort subjects who received chemotherapy are shown in Table 3. The table presents the Spearman's rank correlation of diversity metrics with a cancer recurrence event during follow-up for 338 subjects that received chemotherapy and subset of 102 cancer stage 2 subjects Characterizing the heterogeneity of tumor tissues with histological grade 2 tumor tissues. The correlation of stage and grade with recurrence was computed using multiple linear regression. Worksheet C in S2 File provides a comprehensive overview of correlations. The cell family heterogeneity metric computed based on the cancer hallmark Inducing Angiogenesis had the highest correlation with cancer recurrence. Interestingly, this diversity metric did not correlate as strongly to cancer stage (r = -0.25) as the AKT pathway metric (r = -0.37). The mean cell family heterogeneity was found to be lower for those with a recurrence event (Fig 6A). We observed trends in subsets of subjects based on stage and/or grade that are missing when examining the entire cohort. The Spearman's rank correlations for all subjects who underwent chemotherapy and the stage 2 grade 2 subset are presented in Table 3. The molecular diversity indices show approximately the same correlations with recurrence (~0.16) for the entire chemo-treated group and those limited to stage 2 tumor grade 2 cases. In contrast, for the cell family heterogeneity, the correlation improved (-0.17 to -0.29). Although we have highlighted the cell family heterogeneity metric computed for the Inducing Angiogenesis cancer hallmark, the same trends are observed with less statistical significance for other spatial metrics and cancer hallmarks (section 6 in S1 File). The Cell Coordination Number Entropy showed the same trend (Fig 6B). This spatial metric correlation of -0.25 for the stage 2 grade 2 cases is zero for the chemo treated group that includes all cancer stages and tumor grades ( Table 3). These results suggest that within the stage 2 grade 2 cases, there are spatial features differentiating the subjects in terms of cancer recurrence.
MOHA tool implementation
The MOHA algorithm has been implemented in R. The freely available R scripts include the capability to compute the diversity metrics on the CRC dataset presented here. In addition, functions have been included to generate plots and data tables. The README document from the package contains a step-by-step guide to running the R scripts.
Discussion
The main goal of our work was to develop an approach for the quantitative characterization of tumor heterogeneity from spatially resolved molecular measures of cells. Although we utilized a MxIF tissue imaging dataset to demonstrate our method, the MOHA algorithm can be applied to any dataset that provides spatially resolved molecular measures. We strived to maintain some level of physical and biological rationale for the diversity metrics and to enable intuitive customization of the metrics to best address the biological questions asked. Consequently, the molecular metrics focused on the cell as the atomic unit of the tissue. The molecular state of the cell was defined based on signaling pathways and biological based gene sets depending on their relevance for the specific problem (i.e. the biological question) being examined. The underlying physiological phenomenon behind the use of touching cell neighbors for computing the spatial metrics can be primarily attributed to adjacent cell communication through juxtacrine signaling. Two other mechanisms of cell communication include synaptic or paracrine signaling. The distance at which paracrine signaling typically occurs in vivo is uncertain but is much more likely to effectively occur between two cells in in close spatial proximity of each other. Our methodology is general enough to emphasis paracrine signaling over juxtacrine signaling by changing the dimensionless critical parameter in our approximate method to determine if two cells are neighbors. We tuned the critical parameter to identify cells that were in direct contact with each other based on tissue imaging data. However, this parameter could be increased to identify cells as neighbors over greater spatial distances to model paracrine signaling occurring over longer distances. The cell coordination number reflects tissue topology including epithelial cell polarization, basal luminal organization of glandular structures, and the epithelial-mesenchymal transition in cancer.
Despite their roots in intuitive biological and physical characteristics of the tissue samples, these metrics are statistical in nature. To be meaningful and representative, the metrics should be computed based on many cells. For this study, we used tissue microarrays with core diameters of approximately 0.7-0.8 μm and selected the empirical cutoff of 100 cells in a sample at minimum to pass the first quality filter. This was a realistic compromise to have a non-trivial number of cells without disqualifying too many samples.
The discriminative power of the cell family, cell neighbor, and cell social spatial metrics toward cancer stage and tumor grade can be attributed to the combined inclusion of molecular and spatial information from the tumor tissue. Being able to compute metrics based on each of these factors separately makes it possible to quantify their contributions independently. Similarly, the gene sets or pathways providing the greatest correlation to cancer stage and tumor grade can potentially indicate which biological process are driving the progression of the specific cancer type studied.
We examined the correlation of the diversity metrics with clinical characteristics for the entire cohort, and separately for stage 2 and grade 2 subjects. Especially at this intermediate stage, CRC is heterogeneous with multiple treatment options and various tumor responses that can lead to multiple outcomes. It is critically important to be able to stratify CRC patients at this intermediate stage to identify the more aggressive phenotype and enable the selection of a more aggressive therapy to improve long-term survival. Perineural invasion or the expression of certain non-coding RNA's are promising prognostic biomarkers, but no final conclusion has been reached about their value [40,41,42]. Tumor heterogeneity metrics could provide additional prognostic factors computed from pathology tissue or biopsy samples.
They could potentially provide additional discriminatory power in existing multivariant prognostic models to improve their sensitivity and specificity.
We have shown that both the molecular and the spatial diversity metrics correlate with tumor stage and grade across the entire cohort to varying degrees. The cell coordination number, a purely spatial metric, showed an interesting behavior (Figure C in S1 File). Its average value increased with stage in grade 1 tumors, then it stayed relatively unchanged for grade 2, followed by a decreasing trend with stage for grade 3 tumors. This observation led us to speculate about the ability of the cell coordination number to reflect the longitudinal changes in tumor spatial structure. Starting from the healthy gland, through a more compact structure with increasing number of cell neighbors, to a collapsed, irregular structure in which tumor cells become more isolated with fewer cell neighbors.
We have found the MOHA methodology to be generalizable to other types of cancer including prostate, breast, lung and brain. This is not to say that the tumor heterogeneity is the same across all these cancers but that the physical and biological rationale for our diversity metrics are. For example, the cell coordination number reflects the close spatial proximity of cells to each other that is critical to juxtacrine, paracrine, and synaptic signaling mechanisms. Consequentially, the MOHA spatial metrics are defined to imbed the underlying biological phenomenon (e.g. cell-cell communication) that is common across all tissues. The actual interpretation of the spatial metrics between healthy and the various stages of a disease will be tissue and disease specific (e.g. epithelial-mesenchymal transition in cancer). Our methodology is also general from the perspective of working with any gene sets or pathways to define the molecular state of cells. However, those that provide the greatest insights and discriminatory power will likely be specific to the biological processes driving the progression of the disease under study.
Beyond computing the diversity metrics across the entire sample, this methodology can be used to examine the clonal composition of the tumor. It enables the identification of subsets of cells and their relative locations within a tumor contributing to correlations with clinical metrics across cohorts. This is a direction worth exploring in the immediate future.
Limitations of the MOHA metrics
There are no generally accepted methods available for characterizing tissue heterogeneity at the cell level, and the number of metrics one can propose could be very large. Testing all of them is neither feasible nor practical. Our metrics are based on the Shannon diversity index, a well-known statistical metric frequently used in other scientific areas. What makes these metrics tissue specific and potentially relevant is the choice of the state definitions. While we attempted to make them biologically intuitive, these definitions are exploratory in nature. We designed them in a way that makes it relatively straightforward to modify them. Further testing on multiple datasets and tissue types will enable us to judge how to select and modify them to maximize their usefulness.
Perhaps the most serious limitation of these heterogeneity metrics originates from the semi-quantitative nature of the immunofluorescence intensities used to characterize biomarker levels in the tissue. Ideally, every biomarker specific and fluorescently labeled antibody should have a set of standards allowing the user to calibrate the intensity measurements and relate them to true protein concentrations. In addition, every slide of tissue microarrays should have multiple normal controls to establish disease-relevant ranges for the biomarkers measured. Unfortunately, none of these are routinely available for existing clinical datasets. The methodology of computing the MOHA metrics will not change when such standards and controls become a reality, but their predictive power is expected to improve.
Although we selected measured nodes in the AKT pathway, The MOHA metrics include no information about the pathway connectivity or the directionality of the interactions. Ways to incorporate such information into the heterogeneity metrics is a direction worth exploring. Finally, it is worth mentioning that the spatial metrics proposed here can be somewhat tissuedependent which limits the ability to draw general conclusions about the relationship between spatial heterogeneity and tumor progression across all cancers.
Supporting information S1 File. Supplementary documents. This pdf file (3.3 MB) contains seven supplementary document sections of text and figures. Section 1 presents additional information on methods, the AKT pathway, gene sets and the CRC cohort. Section 2 presents the change in the number of correct and incorrect cell neighbor assignments (True Positives, False Positive, False Negatives) by comparing the assignment of the approximate method of computing cell neighbors relative to the exact method using the cell's segmented image pixels. Assignments broken out by cancer stage and tumor grade. Figures showing the correlations between the diversity metrics as calculated with the exact method of cell neighbor identification versus the approximate method. Section 3 presents box charts with diversity metrics by cancer stage and tumor grade. Section 4 presents plots with molecular disparity, cell family, cell neighbor, and cell social heterogeneity versus molecular heterogeneity computed across 7 gene sets corresponding to cancer hallmarks and the AKT pathway. The values are colored by cancer stage. Section 5 presents plots with molecular disparity, cell family, cell neighbor, and cell social heterogeneity versus molecular heterogeneity computed across 7 gene sets corresponding to cancer hallmarks and the AKT pathway. The values are colored by cancer grade. Section 6 presents box charts with diversity metrics by chemotherapy treatment and recurrence calculated based on 7 gene sets corresponding to cancer hallmarks and the AKT pathway. Box charts of average cell coordination number, number of cells and age at diagnosis broken down by treatment and recurrence are also included. Section 7 presents the frequency distributions of cell coordination numbers and by cancer stage and tumor grade. | 9,560.6 | 2017-11-30T00:00:00.000 | [
"Biology"
] |
A Study of Noise & Development of Traffic Noise Annoyance Models
Although noise annoyance is a major public health problem in urban areas, noise problem is still a great challenge for both public and transportation planners. The quantitative study of traffic noise and its relationship with annoyance & traffic volume was discussed in the paper and at the same time we tried to develop new statistical regression models to relate them. In the present study we also tried to fit different regression models such as Log-Linear, Linear, Log-Log Linear and Quadratic over noise data and decided which model fitted the best by using mathematics of principle of maxima & minima. After the identification of best fit curve we use this to fit our data. The aim of the study was to assess the predictive value of various factors on noise annoyance in noisy and quiet urban streets of New Delhi, capital of India.
Introduction
The World Health Organization defines the community noise (or environmental noise) as noise emitted from all sources except the industrial work places.Main sources of community noise include heavy road traffic.Investigations in different countries in different studies in the past several decades have shown that the noise has adverse effects on health [1]- [3].Noise, which is often referred to as unwanted sound, is characterized by the frequency, periodicity intensity, and duration of sound.Noise annoyance is a feeling of displeasure-irritation or disturbance, and gives a negative effect on community or individual [4].The term "annoyance" is a core concept in the area of environmental effects, but its meaning varies considerably among experts [5].Noise is one of the most important factors in producing deterioration of both well being and quality of life (QoF) of people in urban areas.Noise produces a series of physiological, psychological, behavioural changes in responses [3].
Various researchers have found out in their research that annoyance is very much related with noise levels.Vallet M. et al. [6] revealed that annoyance was related to measured noise levels for the people living along the expressways.Heavy lorries are found to constitute major sources of annoyance, particularly during the evening.For residents in bungalows, noise levels need to be somewhat lower.Ohrstrom E. et al. [7] investigated the acute annoyance reaction to different noise sources (lorries, aircraft, mopeds and trains) in a laboratory experiment.The results demonstrated that L eq demonstrated the best correlation with noise annoyance.However, traffic noise due to lorry is found to be less disturbing than aircraft noise at the same L eq value.Jakovljevic Branko et al. [8] determined principal factors for high noise annoyance in an adult urban population and assessed their predictive value.Noise annoyance was estimated using self-reported annoyance scale.Noise annoyance showed strong correlation with noise levels, personal characteristics and some housing conditions.Paunović Katarina et al. [9] conducted a study to assess the predictive value of various factors on noise annoyance in noisy and quiet urban streets.A cross-sectional study is performed on 1954 adult residents (768 men and 1186 women), aged 18 -80 years.Noise annoyance has been estimated using a self-report five-graded scale.In noisy streets, the relevant predictors of high annoyance are the orientation of living room/bedroom toward the street, noise annoyance at workplace, and noise sensitivity.Thancanamootoo S. [10]
Noise Annoyance
Noise annoyance is defined as an emotional and attitudinal reaction from a person exposed to noise in a given context.
Equivalent Noise Level (Leq)
L eq represents the equivalent energy sound level of a steady state and invariable sound.It includes both intensity and length of all sounds occurring during a given period.The noise levels of different squares in different time intervals were predicted along with their equivalent noise levels (L eq ).The value of L eq in dB (A) unit is calculated by using the formula of Robinson.
In the present study we developed some models to estimate traffic noise annoyance models with respect to traffic noise and traffic volume.
Traffic Noise Index (TNI)
Traffic Noise Index (TNI) is another parameter, which indicates the degree of variation in a traffic flow.This is also expressed in dB (A).
Traffic Volume (Q)
The noise level near the highway depends on the number of vehicles.The noise level increases with an increase in traffic volume.Traffic volume is defined as the total number of vehicles passing a given point during a specific period of time or the number of vehicles that pass over a given section of a lane or a roadway during a specific period of time.
Material & Method
The present research work is based on primary surveys, wherein relationship of degree of annoyance with equivalent traffic noise and traffic volume have been developed on the basis of traffic noise survey, residents' perception survey and traffic volume survey.The surveys have been conducted at six study area locations in Delhi.But the data and developed model of one location i.e.Soami Nagar is discussed in the paper.The residents' perception has been recorded in terms of their degree of annoyance with respect to noise levels.There are five verbal scale degree of annoyance recorded viz."tolerable (1)", "slightly intolerable (2)", "intolerable (3)", "very intolerable (4)" and "extremely intolerable (5)".After conduct of primary surveys data has been analyzed for equivalent traffic noise levels, traffic volume & its composition, residents' perception on traffic noise in terms of degree of annoyance.The equivalent noise levels and corresponding residents' perception have been determined with respect to five time bands in a day.The time bands have been divided based on temporal variation of traffic and time duration suggested by MoEF (The Ministry of Environment & Forests) for day and night.The present study was undertaken in 2011, at New Delhi, capital of India.The 24 hour traffic volume survey and noise measurement survey are conducted at Soami Nagar, New Delhi and thereafter resident's perception data has been collected by interviewing of 62 households for different five time bands.
Traffic Volume
Classified traffic volume survey has been carried out for 24 hour in a working day at Soami Nagar.To differentiate passengers and goods modes, classified traffic volume survey has been carried out for 24 hours.The survey has been carried manually with the help of twenty enumerators at a time through tally marking, wherein all categories of modes are recorded separately for both directions of traffic.The 24 hour traffic volume data has been grouped into five time bands based on temporal variation of traffic and day/night noise limit timings.Observed Average Daily Traffic (ADT) worked out to 1,88,890 vehicles/1,96,414 PCUs with peak hour traffic of 8.2% at 10:00-11:00 hour, please refer Table 1.
Traffic Noise
Traffic noise measurement survey has been carried out for 24 hour in a working day simultaneously along with the traffic volume survey at Soami Nagar.The estimated L eq (day) and L eq (night) in this locality worked out to 68.1 dB (A) and 64.2 dB (A) respectively as presented in Table 2.These values are above the prescribed limit by Ministry of Environment and Forest (MoEF), Government of India of 55 dB (A) and 45 dB (A) in day [11] and night respectively.
Residents' Perception
Sixty two residents have been interviewed to find out their perception on traffic noise.The data has been collected with respect to five time bands.52.4% residents informed that their annoyance level is intolerable (4) followed by 41.9% informed extremely intolerable (5) and 5.6% residents informed it as intolerable (3).
Reliability of Test Instrument
Any research based on measurement must be concerned with reliability of measurement.A reliability coefficient demonstrates whether the test designed is correct in expecting a certain collection of information to yield interpretable statements about individual differences.Validity and reliability are two fundamental elements in the evaluation of a measurement instrument.Instruments can be conventional knowledge, skill or attitude tests, scientific simulations or survey questionnaires.Reliability is concerned with the ability of an instrument to measure internal consistently Tavakol Mohsen, et al. [12].The estimation of reliability of a test item is cheeked by chronobach α.
The Cronbach α is defined as N is the number of test items in the test instruments.
2 X σ is the variance of observed total test scores.
2 Y σ is the variance of component for the current sample of persons.
In the present study we calculated the reliability of test item by using SPSS (Version 19.0,IBM Chicago) software and it was found to be 0.71 which is acceptable.The validity has been estimated as r 1∞ = (r 11 ) 1/2 .Therefore, validity of test items in the present study is r 1∞ = (0.71) 1/2 = 0.84, which means that the test measures true ability to the extent of 84% and this is acceptable.
Selection of Model for Noise & Annoyance
Best noise indicator chosen to represent the noise source is L eq [10], which had been used to examine the relationship between noise exposure and annoyance.The following four regression models were tested to prepare relationship between noise exposure and annoyance: 1) Log-Linear.
The results of the above mentioned four regression models are presented in Table 3, along with the statistical outcomes associated with different models.Among above mentioned models, correlation coefficient "r" and coefficient of determination "R 2 " of Log-Linear relationship were observed maximum.Therefore, it has been chosen to prepare the relationship between noise exposure and annoyance.
Traffic Noise Annoyance Models
The following traffic noise annoyance model has been developed for Soami Nagar by utilising day time equivalent traffic noise and corresponding annoyance data.For this our proposed model is given by, ( )
Discussion
Noise based mathematical model would predict the annoyance of community with better accuracy and is acceptable for this study area.This also provides evidence for the fact that annoyance is more related to the noise levels and also depicts its relationship with traffic flow.It is clear from Table 3 that log linear regression model has high value of R 2 .The role of R 2 can be assumed as an indicator of model which reflects that how much the data fit the model and it is also called coefficient of determination.The model 1 is developed to assess traffic studied the concerns about the noise nuisance resulted from the operation of urban railways (Metro) in Wallsend and Walkergate, UK.
Table 3 .
Comparison of various models for noise and annoyance.
. Development of Noise Annoyance Models with Respect to Traffic Volume The
traffic noise is generated through motorised traffic.Motorised traffic is broadly divided into two categories viz.passenger traffic and goods traffic.The relationship between responses in regard to annoyance and observed noise levels has been studied in the previous sections.But in this section, an attempt has been made to develop the relationship between responses in regard to annoyance and passenger & goods traffic by conducting multiple regression analysis.The annoyance is considered to be dependent variable while passenger traffic & goods traffic turn into independent variable.For this our proposed model is, | 2,501.8 | 2015-08-25T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
A Conceptual Study of Blockchain to Financial Sector
A Blockchain technology is the basis of Bitcoin which has acknowledged widespread attentions recently. It is also known as distributed, indisputable and digital ledger which registers the transactions in the same order it is generated in a close real-time. In blockchain, transactions take place in a decentralized manner. The subsequent transactions can be included to the ledger only by the agreement of the participants in the network which is known as nodes The subsequent transactions can be added to the ledger only by the agreement of the participants in the network which is known as nodes. The applications of blockchain extending from banking, crypto currency and financial services, risk managements, social services and Internet of things. This paper explains a broad impression of blockchain technology in banking applications and recent advances.
These are the scripts executed in blockchain environment. The verification is carried out by customers in the blockchain environment. This ensures honest execution of the -contract.‖ Blockchain 3.0: decentralised applications (DApps.) whose backend is placed on blockchain and storage of data in distributed ledger and can have front end stored on decentralized storages such as Ethereum's Swarm. Blockchain 4.0: based on automation of business scenarios in real time. Some or the examples include ERP, Supply chain management, financial transactions, banking, condition-based payments, IoT and asset management, etc. [6][7][8][9] The remainder of the paper is as follows. Section II explains steps involved in blockchain process. Section III identifies consensus mechanism. Section IV introduces smart contract and Section V describes applications of blockchain technology in banking applications. Section VI highlights the issues in blockchain technology. Lastly Section VII concludes the paper.
Blochain Process
In its most simplify form, the blockchain communication go through the following steps to get into the blockchain: 1. All the nodes in the system receive the connections and it is tested for accept or reject the transactions.
2. To avoid double spending of the transaction, it is broadcast to all the nodes in the system and the genuineness of the transactions are verified.
3. Even the nodes may cluster several transactions into blocks to share with other nodes in the system. The formation of new blocks will be controlled by consensus.
4. Once the nodes accept the block it will be added to the blockchain network with the help of its hash value.
Consensus Mechanism
Blockchains create direct network which can self-correct in absence of a third party to enforce the rules. Data stored on the network as a whole, by definition it is public and data once stored are transparent and cannot be modified by changing any information on the blockchain. This is accomplished by the enforcement of rules through their consensus algorithm. The malfunctions are avoided in the blockchain with the help of consensus mechanism. The validity of the transactions is decided by the consensus algorithm and forking problem is removed in blockchain. When miners parallelly mines a block of transaction then forking problem arise and is avoided by adding a new fork in the linear form of blockchain using consensus mechanism. The longest chain rule concept is used to resolve the forking problem.
Proof of Work ( PoW ):
Proof of effort is a blockchain consensus algorithm, applied by Satoshi Nakamoto in 2009. It was firstly used by Bitcoin then later adopted by Ethereum. To add any new blocks into the blockchain, an algorithm is required. The main goal of Proof of work protocol is preventing cyberattacks such as a distributed denial-of-service attack (DDoS) by which transferring numerous forged requests will be send to the CPU with the determination of draining the resources. In POW, individual nodes of the network need to compute a hash value for the continuously varying block header. The consensus needs that the computed value should less than or equal to some given value. All the nodes in the decentralized network, needs to calculate the hash value uninterruptedly by means of diverse nonces until the goal is reached. When one node finds the appropriate value, all other nodes must jointly confirm the precision of the value. Based on this mining procedure a new block will be created in the blockchain for all the transactions which are used to find the authenticity. The nodes which compute the hashes is known as miners and the miner who resolves the problem first gets a reward. The party with the most power usually mines the block and others just gets their energy wasted because multiple miners compete to create a block at one instance. [10-14]
Proof of Stake ( PoS )
Blocks can also be produced in a blockchain using PoS. The block producers are named as Validator rather than miners. Validators take their chance on the basis of some choice algorithm. The choice is based on account balance, the richest person may guarantee to be leading in the network. As a result, many solutions are projected with the mixture of the chance size to choose which one to form the next block.. In turn, the least hash value along with the size of the stake is used to compute the next generator. Only selected validator can build a block and others cannot play a part, hence saving the energy of the other validators. The validators can get remuneration for honesty and loses their chance if it does mistake. The miners get their transaction price since they do not get remuneration unlike PoW. [15][16][17][18][19]
Delegated proof of stake (DPoS)
In DPoS, block producer are nominated by votes by the one who has network tokens. Block producer candidates that receive the most votes are the one who can produce blocks. Users can also give their voting power to another user who can vote on their behalf. In DPoS is based on opensource protocols meaning that if the users disagree with the majority they can fork. Block producers can be voted in or out at any time, so the risk of loss of income and status is one of the major encouragements in difference to bad behaviour.
Practical byzantine fault tolerance (PBFT)
In PBFT, a new block is resolute in a round where in each round, the most important will be selected according to some rules. PBFT needs that every single node is known to the system. PBFT consensus method does not need any hashing algorithm to agree interactions in a blockchain, which suggests there is no obligation for high energy consumption and the risk of centralization is lesser than in equally of blockchain mechanisms. PBFT is currently being used by the Hyperledger venture, which allows developers to construct their own particular digital resources on a disseminated ledger.
Tendermint
Tendermint is used by the developers for securely and consistently replicate the applications written in whatever programming language and advance setting is right for them. By strongly, we mean that Tendermint works even if up to 1/3 of equipment fail in accidentally and every non-faulty machine realizes the identical transaction log and calculates the same state. Tendermint consists of two chief procedural workings: a blockchain consensus engine and a generic purpose edge. The consensus engine, named Tendermint Core, authorizes that the same business are documented on every machine in chronological order. The application interface, otherwise known as the Application BlockChain Interface (ABCI), allows the transactions to be managed in any programming language.
Smart Contract
A smart contract is a computer code running on top of a blockchain comprising a set of rules under which the parties approve to cooperate with each other. It is a decentralized automation relating two or more parties and digital assets, where assets are deposited by the parties into the smart contract and the resources automatically get reallocated with the parties based on a formula and based on certain data, which is not identified at the time of contract initiation. It enables safety and permits no third party to intervene to avoid fraud or criminal acts. All transactions performed in smart contracts are permanent and trackable. A smart contract describes penalties and rules and also imposes the responsibilities in the agreement automatically.
Ethereum's
Ethereum is an open-sourced immutable blockchain-based platform that permits smartcontracts and has a Turing-complete programming language for launching distributed applications. It can run all blockchains and protocols. All the nodes in the Ethereum network runs the Ethereum VM for smart contract execution in a distributed execution. Ethereum is an efficient protocol for application development to design smart contracts. The distributed ecosystem of Ethereum, includes components like -Ethereum-Swarm‖ -a decentralized file-serving method, -Ethereum-Whisper‖ -a P2P procedures and syntax for cryptographic messaging system to diminish risk between agents in trustless networks.
DApps
DApps stands for decentralized applications that cannot execute on a centralized machine. Dapp runs on a distributed network nodes It protects participant information. Smart contracts allow DApps to connect to blockchain technology for conducting pre-programmed operations. A smart contract or DApp is well-defined in a Ethereum as a transaction protocol to accomplish group of contracts on a cryptographic blockchain. Examples of DApps are OpenBazzar, LaZooz, Twister, Gems etc.,
DAOs and DACs
Decentralized autonomous organizations / corporations are more complex form of a decentralized application. They are notions derived from AI. In a DAO/DAC, smart contracts are running on blockchains that perform ranges of predefined tasks based on conditions. and changing events. Smart contracts operating on the blockchain can perform the functions of real world as well as can instantiate the model to an autonomous corporation.
Blockchain Technology In Banking
Blockchain is a dispersed ledger of transactions, a multi-tiered technology that potentially organize the behavior of customers and their assets based on a mode of transaction ledgers. Registering the electronic transactions in a global insurance blockchain makes transaction scam not possible. Authentication of the transaction accuracy is instant and can be performed by anyone at anywhere. The blockchain may moderate the processing costs appreciably by storing the information in blocks. All major banks can exercise blockchain which could be used for transfer the money, maintain the records and additional back-end tasks. The function of blockchain can replace the paper-based global trade business procedure to an electronic decentralized ledger that offers all the participating entity, including banks, the competence to access a single source of information. It also permits them to pursue all documentation and authorizes ownership of assets digitally, as an irreversible ledger in real time.
Crowd funding
It is a method of raising the capital for a big project such as scientific projects, space research etc., from large amount of people where each one can contribute a small amount of data. In a traditional approach the amount donated by the crowd will be collected by a single organisation where malpractices, fraud, information asymmetry may arise during the fund raising activities. Blockchain is an emerging technology where companies can make and verify financial transactions on a network directly without a central authority. Each transaction made in the blockchain needs to be approved and preserved in the ledger maintained by all the nodes.
Fig.2. KYC Verification in Blockchain
KYC is a major aspect in the battle against financial scam and money laundering. KYC check is the compulsory process of identifying and verifying the individuality of the client when opening an account. KYC procedure comprise ID card authentication, face authentication, document proof such as utility bills such as address proof, and biometric verification. However, in traditional approach KYC is a repetitive process done by individual organization and stored in their database.
Trading Platform
The blockchain skill offers a possible new intermediate to exchange assets with no centralized trusts or mediators and without the risk of spending twice the cost. Blockchain can reduce the risk or the threat of fraud in all areas of banking, and this could equally apply to a trading platform. For each high value property a digital token will be issued to the owner, stating the -certificate of authenticity‖. The token will be moved every time the product is sold or brought and the new ownership will be created and stored in the blockchain. The advantage of the digital token is the final recipient or the current owner of the product can be verified from the chain of protection to all the way back to point of creation. By this way bank can use blockchain as a secured trading platform.
Payment process
Electronic payment is the fast and easy way to perform the transactions. But in today's payment processing service the -beneficial ownership‖ rules makes the transaction time much higher. Ripple is a -real-time gross settlement system‖ (RTGS), money exchange and allowance network with no chargeback. However it is a proprietary blockchain system and did not connect with other system. It is much better to have a global blockchain system to connect varies organizations throughout the world so that transactions can be performed very easily without fraud.
Fraud Detection
Most banking systems in the world, built on a centralized record are more at risk to cyber attack. Blockchain is being accepted as the circulated technology that would reduce fraud. In traditional banking environment is based on paper transaction and electronic payments such as Paytm, Google pay, PayU, etc., and the malfunctions in these transactions can be intentional or unintentional. Private and immutable ledger that enables transactions between the cross banks in a transparent and secure manner.
Scalability
Blockchains are having trouble in effect with supporting a large number of users on the network. Scaling methods have to be verified before implementation into the ledgers.
Privacy
The Bitcoin blockchain is considered to be freely visible. All the information pertaining to a transaction is available for anyone to view. For example, private patient data, government data or financial data should not be available for all as is the case with proprietary industry data. Costs Blockchain is an effective tool for reducing costs. It reduces the fees related with transferring the value and can update operational processes. However, because it is a relatively new innovation, it is difficult to combine it with legacy systems. Such a process is likely to be an expensive issue that many corporations and governments will be unwilling to undertake.
Future Enhancement and Conclusion
Blockchain has likely for transforming traditional business with its key characteristics: decentralization, persistency, ambiguity and auditability. Blockchain refers to a tamper-proof circulated ledger which solves the problems in centralized model. However efforts spent for integration of blockchain to business processes are still at infantacy. Sharding, Editable-blockchain, IoT-specific consensus are some of the key area needs future enhancement. Blockchain can well be combined with Bigdata technology where data management can be handled in a circulated environment and transactions on blockchain can be used for analytics to obtain model. Now a days blockchain based applications are emerging rapidly and we plan to conduct in-depth investigations in banking applications. | 3,446.8 | 2020-09-25T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
ON INFORMATICS VISUALIZATION
— Bad road conditions may lead to road accidents, especially when drivers are unaware of potholes. The presence of potholes can increase from time to time and may get worse due to road age and bad weather. With the Internet of Things technology, vehicles on the road can be a means of collecting road condition data, such as vibration. The raw vibration data are useful only after they are processed into meaningful information. Information about the condition of roads can help other road users be aware of potholes. This paper proposes an Internet of Things application for road conditions detection. We design and implement a device comprising one NodeMCU ESP8266, one accelerometer gyroscope sensor to detect the existence of potholes based on the amount of detected vibration, and a GPS module to get the information about potholes' locations. For the web service, we use REST API so that users can get real-time potholes' information in the Android application. To cluster potholes based on detected vibration, i.e., deep, medium, and shallow, we implement the k -means clustering algorithm with k = 3. The Android application utilizes a Google map to visualize potholes' locations and the result of clustering on a road map. We use colored pins to indicate the depth of potholes. Deep potholes are shown on the map using red pins, medium potholes using orange pins, and shallow potholes using green pins.
I. INTRODUCTION
Transportation is a very important supporting factor that influences the economic growth rate in modern society.Hence, it can be a driving force for the dynamics of the urban development of a city.In big cities, the development of transportation increases rapidly.This is due to the high mobility and activity of the urban population.Manado is one of the largest cities in Sulawesi Island.It is the capital of North Sulawesi province with an area of 162.53 km 2 .Manado has 11 sub-districts with a population of around 451,916 people in 2020 [1].Large cities like Manado have several types of roads, such as main, primary, secondary, and local.
Roads are important infrastructures for land transportation used by the community in their daily activities.In addition, roads are also important to accelerate the economic relationship between one region and the others.Because of these reasons, roads must be adequate to be used.Proper road conditions will support better mobility, provide comfort and ensure the safety of people on the roads.However, the higher the rate of daily activities, the higher the road traffic load.Road conditions may also change gradually because of road age and bad weather.These cause damage to roads, such as potholes.Potholes reduce driving comfort and cause various problems, such as congestion, vehicle damage, and even traffic accidents.These problems may occur as road users do not go through these roads frequently, so they are not familiar and aware about the road conditions.
Nowadays, wireless-based network access and resources are developing and replacing the use of wired networks.The Internet of Things (IoT) is a current technology that is recently being developed because of its advantages in terms of functionality, performance support, and wireless capability.The Internet of Things aims to make it easier for people to interact with all devices connected to the Internet.
In digital age, plenty of data are generated every second from many sources.Inexpensive storage technology has made it possible to maintain all of the data.However, raw data are useless until they are processed to produce useful information.Therefore, it is necessary to analyze the raw data to extract some knowledge.Data mining provides an efficient way to analyze very large data sets and to extract some useful or possibly unexpected patterns in the data.In data mining, clustering is a technique to group the number of data or objects into clusters or groups to find data with the same and different characteristics from objects in other clusters and calculate the optimal number of clusters to produce better clusters.
This paper proposes a design and implementation of a realtime application to detect road conditions by utilizing the Internet of Things technology.The contributions of this paper are a wireless system to detect road conditions and a clustering model to cluster potholes on the road based on their depths.The wireless system uses a sensing device that has one NodeMCU ESP8266, and a GPS module (Ublox NEO-6M-0-001 GY-GPS6MV2) to get the locations of potholes and also an accelerometer and gyroscope sensor (MPU-92/65) to detect vibration.Furthermore, the k-means clustering algorithm is utilized to cluster the depth of potholes as deep, medium, and shallow by using the detected vibration data.The clustering results are visualized on a road map in an Android application.Users can then use their smartphones to view real-time information regarding road conditions.
II. MATERIALS AND METHOD
The Internet of Things enables objects to connect by using the Internet and sending data and information over a network.With relatively affordable computers and ubiquitous wireless networks, it is possible to change anything from the smallest to the biggest.The Internet of Things is a result of the optimization of several devices to make it easier for humans to interact with all equipment connected to the Internet [2].
The Internet of Things has developed very rapidly in the last few years.This seamless communication can be done with every person, process, and object.All devices with or without batteries [3] -can share and collect data with lowcost computing.In today's interconnected world, digital systems can monitor, record, and adjust all interactions between connected things [4].Consequently, humans as users are less involved in the implementation process.
Nowadays, the rapid development of information technology has produced piles of data.Data mining techniques can be applied to look at hidden or previously unknown knowledge from a data set.Several applications of data mining in the literature include using one of the data mining techniques to assist companies in classifying customers accurately based on customer asset outlier data [5], predicting the sex of tarantulas using a classification model [6], finding frequently connected or popular groups of users with common interests and recommending friends for mining a large social network [7], and analyzing public sentiment from tweets against the presidential candidates [8].
Applications of the Internet of Things and data mining techniques have been implemented [9], [10] to detect incoming and outgoing vehicles and count how many vehicles are entering and leaving a parking space.In this application, moving objects are classified as cars, motorcycles, or people.The real-time information provided by this Internet of Things application can supply users with complete information about the parking lot.Angdresey et al. [11] classified the sensor data from a swimming pool to determine its water quality.In this application, swimming pool water is classified as either clean or dirty.An application to monitor ornamental plant soil in a pot to recommend treatment for the plant [12].In this application, three treatment categories were classified for plants: need fertilizer, need water, or normal.These are based on sensor data, including soil pH, air temperature, soil moisture, and air humidity.An application to monitor water conditions in an aquarium has been proposed by Angdresey et al. [13].This application will notify users about the water condition, whether it is normal or needs to be changed, based on water level, turbidity and temperature.
In data mining, different techniques are used -one of which is clustering.Clustering is a technique to identify object classes that have similarities.Using a clustering technique, one can determine the density and distance of area in an object space and identify the overall distribution pattern and correlation between attributes.The k-means clustering is a non-hierarchical clustering algorithm that works by partitioning data into several clusters or groups [14].Hence, one cluster has data with the same characteristics, while two separate clusters have data with different characteristics.
Fakhi et al. [15] implemented k-means by dividing the problem into sub-problems that are handled independently on streaming multiprocessors from one or more GPUs, using the latest generation of GPUs with Compute Unified Device Architecture (CUDA).Daoudi et al. [16] compared the three most efficient implementations of the k-means algorithm.This study shows good results in acceleration effects for data sets.An adaptive image segmentation technique that uses the k-means algorithm to examine different image objects has been proposed by Venkatachalam et al. [17].The aim is to produce accurate results with an easy process and to avoid bilateral input of k values.Grouping using the k -means algorithm gives good results.
The problem of detecting road conditions has been studied in the literature.A road condition detection system that consists of an Arduino and a smartphone has been proposed by Chen et al. [18].In this system, sensor data from an accelerometer are evaluated in Arduino to find the acceleration's average and slop.Arduino then sends the evaluation results and raw data to the smartphone via Bluetooth.This system does not incorporate any data mining techniques for data analysis.A threshold was only used to determine the road conditions.If the vibration level is higher than the pre-determined threshold, the smartphone will automatically record the GPS position.
Road roughness classification is studied by Chen et al. [19], that used a GPS and an accelerometer to gather road roughness data.The data, including the three-axis acceleration, velocity, location, and time, are sent to a server for further processing.The server classifies the roughness levels, estimate the international roughness index, and analyzes the power spectral density of surface roughness.However, after the server analyzes the data, potholes' locations are not informed to road users.Meanwhile, participatory sensing to detect potholes has been studied by Medins et al. [20], that used Android OS-based smartphones with accelerometers as the hardware platform.They use several algorithms based on threshold and standard deviation to detect potholes.
In this paper, we use the Internet of Things concept together with a data mining clustering algorithm, i.e., the kmeans algorithm, to detect potholes, cluster them based on their depth (deep, medium, and shallow), and inform users about their locations.We choose k = 3 [21]; potholes are categorized into three damage levels based on their depths, i.e., lightly damaged, moderately damaged, and badly damaged.We firstly describe the wireless system as a whole and then describe the hardware and software separately.Specifically, we explain their detailed design and implementation.
A. The System
The wireless system to detect road conditions is illustrated in Figure 1.The sensor device in this system consists of one NodeMCU ESP8266, a GPS module (Ublox NEO-6M-0-001 GY-GPS6MV2) to detect the location, and an accelerometer gyroscope sensor (MPU-92/65) to detect vibration.As the web service to bridge end users and the sensor device, we utilize REST API (Representational State Transfer Application Programming Interface).Users can then use our Android application, which is installed on their smartphones, to access information regarding road conditions.
B. The Hardware
The sensor device (hardware) design is depicted in Figure 2, where a NodeMCU ESP8266, a GPS module, and an accelerometer gyroscope sensor are connected on a breadboard.We connect the accelerometer's SCL pin to the D1 pin on NodeMCU, the accelerometer's SDA pin to D2 pin on NodeMCU, and the GPS's TX pin to D3 pin on NodeMCU, and the GPS's RX pin to D4 pin on NodeMCU.Then, we connect VCC pins on GPS and accelerometer to both 3V pins on NodeMCU.Furthermore, we connect GND pins on both GPS and accelerometer and the battery's negative pole to G pins on NodeMCU, while the battery's positive pole to VIN on NodeMCU.We show the summary of pin configuration for the sensor device in Table 1 and the sensor device's real image in Figure 3.
C. The Software
The software is divided into programs on the sensor device (the client-side) and the server-side.Figure 4 shows a flowchart for the client-side, where the processes start from sensor initialization, data gathering, to data transfer.To minimize overhead, sensor data are sent only if potholes are detected.A pothole exists when an accelerometer reading is below or above certain thresholds.Processes on the client side are going on and on in a loop as long as the sensor device's battery still has power.Figure 5 shows a flowchart for the server-side, where the workflow starts when the server receives data from the sensor device.The data are then stored and processed using the kmeans clustering algorithm with k = 3.These include calculating centroids, calculating Euclidean distances to the centroids, and grouping data into clusters.The clustering results are sent to the Android application, and the processes repeat.We explain the k-means clustering algorithm in detail in the following section.
In this application, we implement three program modules: the k-means function, Google Maps API, and the HTTP methods.We use the k-means function with k = 3 to cluster vibration data into deep, medium, and shallow potholes.We use HTTP to connect the application, sensor device, API, and database.Finally, Google Maps API is used to inform users regarding potholes' locations on top of road maps.
D. Object Clustering Method
Clustering is a data mining technique to organize data into several clusters or groups, where data between clusters have a minimum similarity and data in one cluster have a maximum similarity.Objects in a cluster are different from other clusters, but they have similar characteristics to each other in their clusters.In order words, clustering is a data segmentation that divides multiple data sets into groups according to their similarities.Two clustering methods can be used to group data: hierarchical clustering and partitioning.This paper uses the partitioning method, i.e., the k-means clustering algorithm.
The k-means clustering algorithm aims to minimize the objective function set during the clustering process by maximizing variations with data in other clusters and minimizing variations between data in a cluster.In the kmeans clustering algorithm, firstly, we need to determine how many clusters we need, and the number of formed clusters must be less than the number of existing data (k < n).Secondly, we need to initialize a centroid and calculate distance space or data distance to the centroid by using the formula to calculate Euclidean distance as given in Equation 1, where D is the Euclidean distance from data to the centroid, a is data, and b is the centroid.
At iteration = 1, the data distances to centroid C1 are as follows: Furthermore, the data distances to centroid C2 are: Data are then grouped into clusters based on their shortest distances to centroids.For example, data1 joins cluster 1 because D(data1, C1) < D(data1, C2), while data2, data3, and data4 join cluster 2 because D(datan, C2) < D(datan, C1) for n = 2, 3, 4.After all data get their cluster labels, a new centroid for each cluster is calculated by averaging the value of its data.That is, by summing all value of data in that cluster and dividing it by the total number of data in that cluster.In this paper, we use k = 3 ac to the depth of potholes, i.e. deep, medium, and shallow.The Euclidean distance formula for k = 3 is shown in Equation 2.
( , ) = ( − ) + ( − ) + ( − ) (2) For the following iterations, new centroids are calculated again.Data are grouped again into clusters according to the shortest distances to the previously found centroids.Then, the data value in each new cluster is averaged to find its new centroid.After obtaining cluster labels for all data and if all centroids do not change, the clustering process is complete.
III. RESULTS AND DISCUSSION
We conduct some experiments to evaluate the wireless system to detect road conditions in Manado -the capital of North Sulawesi province.We secure one sensor device on a motorcycle (at the front part).A user with the device attached to his motorcycle can acquire sensor data and update the database.On the other hand, a user without the device can only use the Android application to view the road conditions.In this experiment, our smartphone has Android 6.0 API 23 (Marshmallow).
The sensor device detects a pothole if one of the accelerometer readings is below or above certain thresholds.Based on some trials, we define the thresholds, i.e. a pothole is detected if a value from any accelerometer axes is greater than 5000 or less than -5000.When the sensor device detects a pothole, it transmits the sensor data to the database through API.
Figure 6 shows data received by API from the sensor device when we check data transmission.The data include the three-axis accelerations denoted by ax, ay, and az, and also the coordinates of the pothole's location (latitude and longitude).The data set of potholes that are obtained by the sensor device is shown in Table 2.We cluster the data by using the k-means clustering algorithm with k = 3.Hence, the sensor data are clustered into shallow, medium, and deep potholes.The clustered sensor data are shown in Figure 7.The three axes represent the acceleration in the x, y, and z directions.Red dots are data clustered as deep potholes, orang dots are medium potholes, and green dots are shallow potholes.In addition, we also provide pothole information, including the clustering result, latitude, and longitude in a text format.Figure 9 shows an example of a pothole description that is displayed when a user chooses a pin on the map.In this paper, we design and implement a real-time Internet of Things application for detecting road conditions.Our sensor device is built with a NodeMCU ESP8266, an accelerometer, and gyroscope sensor (MPU-92/65) to detect vibration, and a GPS NEO-6M module to obtain potholes' locations.As the web service, REST API is utilized to bridge end users and the sensor device.We implement the k-means clustering algorithm with k = 3 to cluster the vibration data to identify the depth of potholes, which are either deep, medium, or shallow.Users can view the visual representation of potholes' information on a road map in the Android application.
Fig. 1
Fig. 1 The wireless system A pothole is detected when one vibration data is below or above a threshold.When the sensor device detects a pothole, NodeMCU sends a request to API.Communication between NodeMCU and API is established by using the HTTP (Hypertext Transfer Protocol) methods.Data transmitted by the device are the latitude and longitude of the pothole's location and the three-axis accelerations.API uses the kmeans clustering algorithm with k = 3 to process data stored in a database.Data are clustered to define the depth of potholes, i.e., deep, medium, and shallow.The clustering results are then recorded back to the database, organized in a JSON (JavaScript Object Notation) format, and then transmitted to the Android application by API.
Fig. 6
Fig. 6 Data received by API
Fig. 8
Fig.8Google map in the Android applicationFigure 8 shows Google Maps in the Android application.It visualizes the result of clustering and the information about potholes' locations on a road map.In this application, we use colored pins to indicate the depth of potholes, where red pins indicate deep potholes, orange pins represent medium potholes, and green pins represent shallow potholes.In addition, we also provide pothole information, including the clustering result, latitude, and longitude in a text format.Figure9shows an example of a pothole description that is displayed when a user chooses a pin on the map.
Fig. 9 A
Fig. 9 A pothole description
TABLE II DATA
SET OF POTHOLES | 4,414.2 | 2022-09-30T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Risk Evaluation of Strategic Indicators
Corporate management increasingly demands strategic decision support and the use of scientific tools and methods of modelling uncertainties, thus creating a connection between decisions and their expected outcomes. To put it differently, corporations want to bear the risk of their decisions consciously in order to maximise their profits. For this reason, risk analysis and risk management are highly topical issues in corporate practice. The literature of risk management introduces many different tools and methods to carry out risk analysis. However, as we studied the available sources we found that they were difficult to apply, as they were described in a language too difficult for practicing professionals to understand, and illustrative examples were rarely used. In other words, the methods recommended in specialised literature are generally not user-friendly. Rather than providing a scientific classification of the methods offered in professional literature or proposing their enhancement, the primary aim of this paper is to put forward a theoretically well-based risk analysis approach that is easy to use in corporate practice. This method will be discussed in the next section. Before explaining the detailed methodology, however, we feel it essential to define shortly the concept of risk management in order to facilitate a better understanding of the topic. One of the essential features during a decisionmaking process is the existence of uncertainties. Uncertainty means that the probability of occurrence of a given future event and its consequences are not known exactly. Risk usually means the particular negative or positive consequences while the occurrence itself is uncertain, but its probability can be calculated or estimated (Görög 2008). In order to assess the risk, different risk sources and events should be first identified. According to Hillson’s approach, risk usually refersto uncertain events that may have negative or positive outcomes (Hillson 2002). The inherent level of a particular risk is determined by the likelihood and magnitude of associated events (Hopkin 2012).
INTRODUCTION
Corporate management increasingly demands strategic decision support and the use of scientific tools and methods of modelling uncertainties, thus creating a connection between decisions and their expected outcomes.To put it differently, corporations want to bear the risk of their decisions consciously in order to maximise their profits.For this reason, risk analysis and risk management are highly topical issues in corporate practice.
The literature of risk management introduces many different tools and methods to carry out risk analysis.However, as we studied the available sources we found that they were difficult to apply, as they were described in a language too difficult for practicing professionals to understand, and illustrative examples were rarely used.In other words, the methods recommended in specialised literature are generally not user-friendly.Rather than providing a scientific classification of the methods offered in professional literature or proposing their enhancement, the primary aim of this paper is to put forward a theoretically well-based risk analysis approach that is easy to use in corporate practice.This method will be discussed in the next section.
Before explaining the detailed methodology, however, we feel it essential to define shortly the concept of risk management in order to facilitate a better understanding of the topic.
One of the essential features during a decisionmaking process is the existence of uncertainties.Uncertainty means that the probability of occurrence of a given future event and its consequences are not known exactly.Risk usually means the particular negative or positive consequences while the occurrence itself is uncertain, but its probability can be calculated or estimated (Görög 2008).In order to assess the risk, different risk sources and events should be first identified.
According to Hillson's approach, risk usually refersto uncertain events that may have negative or positive outcomes (Hillson 2002).The inherent level of a particular risk is determined by the likelihood and magnitude of associated events (Hopkin 2012).
DECISIONS
In this section the different approaches how to identify, analyse, evaluate and treat the risks will be highlighted.
Interpretation of risk management
It is interesting to investigate how risk analysis and response work in practice if there are insufficient historical data available.In the risk management literature a number of methods can be found that are suitable for risk assessment.Most of them can only be used if there are historical data available, as they rely on statistical analysis to assess risks (see e.g.Jorion 1997).If someonewould like to calculate exchange rate or interest rate risk exposure, for example, these statistical methods can be used if daily databases are available.But what is the situation if somebody would like to assess risks having an impact on the strategic goals of the company where he or she is working?An example could be to select the best strategic alternative by evaluating the yield/risk ratio for each alternative.In this case, there is rarely a daily database to use for assessing most risks.Of course, the probability of occurrence and impact of these risks should always be assessed (estimated) in a reliable manner.
There are also different approaches available to assess risks.These can be divided into two main categories: qualitative and quantitative methods.Qualitative methods are easy to use in practice, but reliability may not possible to ensure.Quantitative methods may ensure the reliability of analysis, but usage of them requires a large amount of historical data.
It seems an obvious suggestion to produce input data for quantitative methods (e.g.Monte-Carlo Simulation) by using the many years' experience of participants attending aworkshop to ensure reliable risk assessment.Of course, a special methodology is necessary for this, but it is worth to apply.The method presented below has been used in more than 50 different applications to date.The aim of this paper is to summarise the main steps of this method and to show how to use it in practice.
Risk management covers a systematic process of identifying, analysing, evaluating, responding to and controlling risk (Cooper & Chapman 1987;Chapman and Ward 2003), (PMI 2008).The risk management process for these steps is shown in Figure 1.The specialities of the process will be briefly summarised below even for a situation wherehistorical data are missing or inappropriate.
Identification of risk sources and events
The first task is to identify risk sources/ events in a structured form.Several techniques have been proposed for professionals to identify risk sources/events (Loosemore et al. 2006;Ohtaka & Fukuzawa 2010).
For the method in question, brainstorming is needed for executing the task.Workshops lasting a few hours or even days, depending on the nature of the task, can also be helpful.The composition of participants is important, since the results are influenced to a great extent by the presence or absence of experts having relevant knowledge.
In case of inappropriate historical data a pre-made database can be helpful to enhance the identification of risk factors (de Bakker et al. 2010;Bannerman 2008).This database can be customised according to the needs of particular organisations.There are different lists for this available in the risk management literature (see for example Summer 2000; Hartman & Ashari 2002;Chow & Cao 2008;Lind & Culler 2011).
Quantitative risk assessment
Identification of risk sources and events is followed by the step of quantifying the probability of their occurrence and impact.This paper focuses on how to use the developed methodfor defining input parameters of the Monte-Carlo simulation (Hertz 1964).
The first task is to delineate the scope of the analysis and to define the elements of the analysis target values.The next step is to identify and assign potential risk sources and events to each elementof analysis.The identification is done by experts at a workshop.
After the identification is completed, a maximum of four different scenarios (Watchorn 2007) will be assigned to each identified risk source and event.The next task is to estimate the subjective probability of occurrence and impact of each scenario.This is done by experts at theworkshop using their many years of experience.It is important to note that the sum of the subjective probability of occurrence of the maximum four scenarios cannot exceed 100%.
Following that, the existence of interrelation (if any) among the different risk sources and events must be assigned to one cash-flow element (Hunyadi et al. 1993).If found, its direction and intensity must also be investigated.(Thedirection is positive if an increase in one variable's value can cause another variable's value to increase and negative if a decrease in one variable's value can cause another variable's value to increase.The intensity can be measured by a correlation factor between -1 and 1 (Hunyadi at al. 1993)).To answer this question, experts' estimation should be used.Empirical experience shows that it can be assumed that the value of the correlation measuring the intensity between two probability variables can be maximum ±0.6 in the case of strongest intensity.So the experts attending the workshop only have to decide whether the intensity between two variables is strong, medium or weak using their experience.In this way they can estimate the value of correlations ranging from -0.6 to 0.6.Of course, it is not possible to calculate exact correlation values in this way.But it should be remembered that in this case there are insufficient historical data available to usestatistical methods for this task.
The next task is calculation of the expected value and standard deviation of eachelement using the results of the scenario analysis.These will be the input data for the Monte-Carlo Simulation.The expected value and standard deviation can be used for selecting critical risk sources and events as well.Inour understanding not every risk should be treated, anyway.This is because the cost of treatment can be higher than the cost incurred from the occurrence of the risk.To ensure the best efficiency of treatment activity it is vital to select the critical risks which should be treated in any way.To do this, a special rule can be used.According to this rule, a risk is critical if the value of relative deviation (ratio of standard deviation/expected value)is higher than a predefined threshold value.There has beenno exact equation to calculate the limit of any threshold value so far.It can only be defined by using the experience of a risk analyst.In thispaper we will show how to define the threshold values with regard to acase study.
If historical data are missing or inappropriate, the way suggested above can help to increase the chance of selecting the best suited probability distribution curve, mean value, and standard deviation belonging to it.This is the reason forperforming a scenario analysis first and running Monte-Carlo Simulation only after finishing the scenario analysis.
Selection of dependent probability variables is the next task.The change invalue of an independent probability variable can cause the change of value of a dependent variable.When all input data are at our disposal, Monte-Carlo Simulation is ready to run.Once the predefined number of iterations has been reached, the probability distribution of net present value with all characteristic statistical values (mean value, standard deviation, range, etc.) can be produced.The probability distribution can also contain the target value, so it is possible to compare the results of calculation before and after risk analysis.This is done with the support of any computer program for risk analysis found on the market (e.g.Oracle Crystal Ball, Palisade @Risk or Szigma Integrisk).
Steps of risk evaluation
Risk evaluation requirescreating a high-level network diagram, including: • the exact definition of activities, • definition of the duration of activities, • logical relationships between activities and • detailed resource and budget allocation (Grey 1995).These data are the target values (values before risk analysis).Each project activity will work as independent probability variables during the Monte Carlo Simulation.
The next step is to identify and assign potential risk sources and events that can have an impact on the duration and/or cost of every single activity (dependent probability variables) originally calculated.When identification is completed, the probability of occurrence and impact of each risk source/event will be estimated by scenario analysis as above (Cleden 2009).The interrelation among risk events and independent probability variables (duration and/or cost) should be analysed (Nakatsu & Iacovou 2009).
Thisis followed by selecting the probability distribution of the duration/cost of each activity with the use of the results of scenario analysis.In practice, the most frequently occurring distributions are the beta, gamma, triangle, lognormal, and normal distributions (Evans et al. 1993).After this, the parameters (expected value, standard deviation) characteristic of the given distribution should be calculated.The value of the probability of occurrence of activities after junctions in the network diagram should be estimated.It is important to keep in mind that the sum cannot exceed 100% (Grey 1995).
When all input data are available, the simulation process can be started.The length of the critical path and/or total cost of the project are calculated from a large amount of random data obtained from each probability distribution of the duration/cost of every single activity.This can be accomplished by any risk analysisprograms listed above.After reaching the predefined number of iterations, the probability distribution of the critical path and/or total project cost can be produced (Grey 1995).
Response to the risks
The risk management process has to formulate and execute risk response actions for critical risk sources and events selected previously.Risk response could have the aim of avoiding, sharing, transferring or accepting a risk by means of defining a risk response programme (Harris2009).It is important to consider the following aspects whenformulating a risk response programme: • The elements should have a quick-win characteristic, i.e. should be applicable quickly and at a reasonable cost.Reasonable costs mean lower cost than in case of occurrence of the risk event.
• Risk response actions should be measurable during actualisation.In case of an investment project it may be possible to increase the chance to finish the project on time and within the budget or to ensure the targeted project return.In other words, the execution of suggested risk response actions shouldmove the measured value closer to the target value (value before risk analysis).
It is important to assign a risk owner tothe proposed actions.A risk owner is a person or an organisation that is responsible for responding to a risk.Now we will present different risk response actions (Balaton et al. 2005): Risk avoidance -basically thiscovers those actions that are aimed atavoiding the occurrence.It is used when risk sources/events often occur and the likely impact is high (Pataki& Tatai 2008).An example of this could be the integration of check points, including internal regulation.Risk mitigation -this could be aimed at minimising the probability of risk occurrence by preventing the risk from occurring.A good example can be lobbying in order to influence lawmakers.Another approach is for the companyto prepare different actions in order to influence the impact, in many cases to increase the impact of positive risk events.A good exampleis business continuity planning.Transferring or sharing risks -thismeans finding a partner who consciously or unknowingly assumes at a certain pricelosses generated from potential dysfunctions.A typical case of risk transfer is insurance, but hiring an external contributor to implement a project could also be an example (Görög 2008).Risk acceptance -In this case, the risk cannot be avoided or transferred, or the likely impact is out of proportion with the costs of responding to it.This implies that management bears the magnitude of the risk consciously.
Risk controlling
The final step of the risk management process is performing risk control that covers updating the dataset, follow-up actions, and plan-fact analysis.
Risk management should be considered as a snapshot at a given moment.But it could happen that the kind of information that basically influences the results of analysis is found the next day.In this case, it is worth redoing the whole exercise.Of course, now the analysis can be done quickly, since it only consists of the transfer of the results from recording and assessing the new risk arising from new information.It could change the list of critical events that could modify the risk response actions.
The second element of control activity is following the risk execution program, which is based on risk response proposals.This could be considered as classical control activity and in the course of this the following tasks should be solved: overview of the situation, impact analysis, modifications based on impact analysis, ordering and publishing the modifications and the execution of modifications.
The third component of control is performing a planfact analysis after finishing the execution of the risk response actions.The aim of the analysis is to compare the post-program status with the pre-program status.The planfact analysis means an input for cost-benefit analysis (Rédey 2012), which can measure the effectiveness and efficiency of the risk management activity.
RISK EVALUATION IN THE CASE OF STRATEGIC INDICATORS
The University of Miskolc has prepared and approved an Institutional Development Plan that includes the strategic goals and the related performance indicators (in harmony with the Balanced Scorecard -BSC indicators) annually for a five-year period.Achieving the target values of the five-year period may be influenced by various strategic risks, positively or negatively.It is essential for the university to identify and understand the risks that may have any effect on these indicators.Based on the identified risks, strategic actions can be developed and performed in order to control the operation in accordance with the set objectives.It should be noted that the challenge is not a single intervention; continuous (regular) control is necessary.The process is summarised in Figure 2.
Figure2. Strategic control process Source: created by the authors
The details of risk evaluation are presented in Figure 3.
Figure 3. Process of risk analysis of strategic indicators Source: created by the authors
The content of the risk analysis process using the methodology in the previous sectionis as follows.A presumption is that the strategic indicators are available.
The initial step is to organise the indicators into homogeneous groups.The aim of grouping is to find the strategic issues that may be influenced by similar risk factors.Homogenous groups must be the results ofteamwork.The experts of the university perform a workshop that allows the proper teamwork.In the beginning external experts were involved in order to learn the methodology and keep focus on the content.Of course, the list of indicators in a group is not set in stone, the relevant strategic indicators may be changed.Review of the groups must be performed by the internal experts regularly, at least annually.
The next step is to designate the risk factors of the strategic indicators within each group.There are various sources that can be used for supporting the assignments.In addition to expert estimation, historical data and literature sources shouldbe taken into consideration.Establishing a comprehensive risk database will significantly increase the effectiveness of this step.Proper designation of risks factor is essential because the probability and the impact can only be assessedproperlyin this way.If a risk factor is assigned to more than one strategic indicator, it must be evaluated separately by each indicator because the impacts may be different.Table 1 shows an example of assignment.
Risk factor Description of the risk factor
Rate of students admitted to the University of Miskolc compared to all students gaining admission in the recruitment process of the given academic year Legal policy changes / Changes in government funding quota Changes in the government funding quota will influence the number of students admitted to the University of Miskolc compared to all students admitted in the country.Natural sciences and engineering studies have a higher quota, while the quota of law and economic studies is reduced.Minimum limits of admission scores may be changed.
University's reputation
Improving the university's reputation may attract potential students, so this can influence the number of applications (Rate of students admitted to the University of Miskolc compared to all students gaining admission in the recruitment process of the given academic year) The task of risk factor evaluation is supported by a scenario analysis performed in a workshop.The experts of the University of Miskolc reviewed the factors one by one.Possible impacts are summarised in the description of the risk factor based on the methodology described above.It must be noted that there is a simplification in the process: interaction between the risk factors is out of scope.It is hypothesised that the risk factors are independent from each other.We know that this is not always true, but the lack of historical data does not allow an estimation of interrelations with an acceptable level of reliability.The high failure ratio of the estimation does not help the proper evaluation but needs huge efforts.Table 2 shows an example of scenario analysis.2.
Increasing demand for bachelor courses 15 5
There is a competition for places in technical faculties, especially the Faculty of Mechanical Engineering and Informatics,alsoin the Faculty of Economics.Demand for courses of the Faculty of Law is influenced by the distracting effect of the University of Debrecen.Health care courses havea competition for places as well.Based on the data of felvi.huapprox.50−60% of these applications are firstplace applications, so this tendency may further increase.
3.
Reduced demand for bachelor courses
-5
Based on the forecasts there is a low probability of decreasing demand for the bachelor courses.There is a decline to be seen in the number of applicants in comparison between the years 2011 (8,003) and 2014 (4,937), especially in the number of applicants with government funding (from 2,149 to 1,899), so a general decline may be indicated if the number of fee-paying students will not compensate.
Calculating expected values and standard deviation based on the results of scenario analysis allows to find the risks being treated.Changes in the government funding quota will influence the number of students admitted to the University of Miskolc compared to all students admitted in the country.Natural sciences and engineering studies have a higher quota, while the quota of law and economic studies is reduced.Minimum limits of admission scores may be changed.
23.5 25.70 After making the scenario analysis the experts chose the risk factors which are critical to manage in order to achieve the university's strategic goals through meeting the target values of strategic indicators.The methodology requires defining tolerances for the expected values and dispersions calculated during the scenario analysis.Critical risk factors mustbe managed.A risk factor is considered to be critical if it exceeds any of these tolerances.The university experts use a tolerance limit 10% for the expected value and 200% for the relative deviation (standard deviation divided by the expected value of difference).Table 4 shows examples of critical risk factors.
Critical risk factor Description of the risk factor
Rate of students admitted to the University of Miskolc compared to all students gaining admission in the recruitment process of the given academic year Legal policy changes / Changes in government funding quota Changes in the government funding quota will influence the number of students admitted to the University of Miskolc compared to all students admitted in the country.
Natural sciences and engineering studies have a higher quota, while the quota of law and economic studies is reduced.Minimum limits of admission scores may be changed.
University's reputation
Improving the university's reputation may attractpotential students, so this can influence the number of applications.
(Rate of students admitted to the University of Miskolc compared to all students gaining admission in the recruitment process of the given academic year.) The next step of risk evaluation is elaboration of (strategic) risk management actions.Besides the description of the actions, this shouldinclude both the implementation deadline and the designation of the individual responsibilities.Planning of actions is also performed as a part of the risk management workshop.A proposed risk management action is shown in Table 5.In addition tothe numerical analysis and the content of the tables above, an evaluation summary is needed that explains the main results and the relationship between the particularparts and figures.An important goal of this task is the consolidation of the critical risks.In practice, consolidation means the determination of core risk factors, i.e. risk factors that are different from each other in content.A prerequisite for being a core risk factor is that it is assigned to at leastone strategic indicator by the university experts.Consolidation shouldalso: • summarise the risk factors byflagging the indicators theyare assigned to, • flag the critical risk factors by strategic indicators.
Eventually, the flagging designates the risks that must be managed.Table 6 shows an example of a consolidated list.It is necessary to consolidate the risk management action based on the consolidation of risk factors.The results shall consider the suggestions (strategic risk management action plans) of the university experts.The output of consolidation is a report for decision makers that includes in a comprehensive way the followings (an example is shown in Table 7):
Table 6 Consolidated list of critical risk factors
• consolidated risk management actions, • personal and/or department level responsibilities, • expected deadlines for performing the actions.
Results of consolidation should be uploaded to the databases of the university'sinformation management system.As a result of scenario analysis, annual information is available about the expected values and standard deviation of difference from target values of strategic indicators.This is followed by a comprehensive evaluation of each risk factor, including the calculation of a total deviation from the target values.These will allow us to calculate adjusted target values of the strategic indicators.Target values before the risk analysis process shouldbe adjusted by the calculated risk characteristics (expected values and standard deviation).Ultimately, the adjusted target values show the deviance from the institutional development plan.Higher differences in the values show the higher importance of risk management actions in order to enhance the possibility of achieving the original target value.Adjusted target values should also be uploaded to the databases of the university's management information system.
CONCLUSIONS
Systematic risk management supports institutional decision making.The systematic approach requires both a clear methodology of calculations and a proper workflow adapted to the organisational characteristics.The paper summarises the solution of the University of Miskolc.The main experiences and conclusions based on the pilot run of the system are the following: • Establishing risk identification and analysis as a supporting tool of strategic planning helps to understand the influencing factors of strategic objectives and to work out proper actions in order to increase the chance of fulfilling these objectives.• Realisation of the expected benefits is only achievable by performing the risk management actions, so attention must be given to assigning granting proper authorityand responsibilities.• It is important to upload the results to the databases of the management information system that require the necessary integration development actions (including changes in regulations and technicalprogramming development).• Deep and intensive risk analysis makes the updating processes within the planning periodeasier.Due to the continuous changes in internal and external environment of the university it is necessary the modelling of the influencing factors that is easier in case of the proper initial analysis.• Detailed justification and (if achievable) data support forthe results of risk analysis enhances itscreditability and acceptance.The pilot evaluation is being carried out as a part of the TÁMOP-4.1.1.C-12/1/KONV-2012-0001 project.Long-term utilisation requires the organisational integration of the process and the methodological elements, including harmonisation with the management information system and an up to date risk management regulation.Furthermore, decision makers must recognise the benefits and accept the results.
A further challenge insystem development is improving the accuracy of the expert estimation.We plan to carry out action research about further strategic influencing factors of the strategic position of the University of Miskolc.Including more factors in the risk analysis will allow us to draw up a more sophisticated map of risks and to evaluate the expected effects of the factors in a more detailed way.Our goal is to build up a structure of factors that is ready for running a Monte-Carlo simulation, which could give more accurate results.
Figure 1 .
Figure 1.The suggested risk management process Source: created by István Fekete studying inagiven course at the University of Miskolc compared to studentsinthe course nationwide Changes in the number of partners involved in practical education Utilisation of R&D&I infrastructure Level of R&D&I orders Number of PhD students Number of Hungarianand international publications and the ratio of them compared to the number of employees in education/research jobs Number of scientific publications and four-year target values of increment by institutional (faculty) level Number of Hungarianand international monographs and professional books and the ratio of them compared to the number of employees in education/research jobs
Table 2
Example of scenario analysis Table 3 summarises a sample result.
Table 3
Sample results of scenario analysis
Table 5 A
proposed (strategic) risk management action
Table 7 Consolidated risk management action Risk management action Indicator / risk factor Person in charge Deadline
Table 8 shows examples of adjusted target values.
Table 8
Strategic target values adjusted by the results of risk analysis | 6,581.4 | 2015-01-01T00:00:00.000 | [
"Business",
"Economics"
] |
The role of domain wall junctions in Carter's pentahedral model
The role of domain wall junctions in Carter's pentahedral model is investigated both analytically and numerically. We perform, for the first time, field theory simulations of such model with various initial conditions. We confirm that there are very specific realizations of Carter's model corresponding to square lattice configurations with X-type junctions which could be stable. However, we show that more realistic realizations, consistent with causality constraints, do lead to a scaling domain wall network with Y-type junctions. We determine the network properties and discuss the corresponding cosmological implications, in particular for dark energy.
Introduction
There is now overwhelming observational evidence that our Universe is presently undergoing an era of accelerated expansion [1,2]. In the context of general relativity such period can only be explained if the universe is permeated with an exotic dark energy component violating the strong energy condition. The dark energy is often described by a nearly homogeneous scalar field minimally coupled to the other matter fields. If the scalar field is static then it is equivalent to a cosmological constant but the more interesting case is definitely that of a dynamical scalar field [3].
Nevertheless, the dark energy role is not necessarily played by a (nearly) homogeneous field. In fact, it has been claimed that a frozen domain wall network could naturally explain the observed acceleration of the universe [4]. However, this possibility has been seriously challenged by recent observational results which favor a dark energy equation of state parameter, w, very close to −1 (note that w = −2/3 + v 2 −2/3 for domain walls, where v is the root mean square velocity). Furthermore, although it is possible to build (by hand) stable domain wall lattices there is strong analytical and numerical evidence that no such lattices will ever emerge from realistic phase transitions [5][6][7][8][9]. This provided strong support for a no-frustration conjecture invalidating domain walls as a viable dark energy candidate.
Still, it has been argued that winding domain wall models with X-type junctions could give rise to static lattice type configurations thus accounting for at least a fraction of the dark energy density [10][11][12][13]. Carter's pentahedral model [10,13] has been constructed as an example of a model having an odd number of vacuum configurations giving rise to an even type system through the formation of X-type junctions. However, in [5,6] the claim that Carter's pentahedral model would form X-type junctions has been challenged and it was argued that Y-type junctions would be formed instead. In this Letter we definitely settle this question.
Throughout the Letter we use units in which c =h = m = 1, where the mass scale, m, can be chosen arbitrarily.
The model
Consider the action where Φ and Ψ are complex scalar fields, V (Φ, Ψ ) is the scalar field potential, Open access under CC BY license.
Consider a planar static domain wall perpendicular to the z direction and assume that θ = θ(z) and cos(χ /2) = 0. The only non-trivial equation of motion is given by 1 10 dθ dz 2 = 2V ε cos 2 (θ/2), (9) or equivalently dθ cos(θ/2) which has the solution for a domain wall located at z = 0. Using Eq. (11) it is straightfor- 20V ε is the domain wall thickness. In the following we shall drop the ∓ sign. It will be sufficient to realize that for each solution θ = θ(z), there will also be another solution given by θ = θ(−z).
The energy density, ρ, associated with the domain wall is where Eq. (9) was used to obtain the final result. Finally the domain wall tension associated with a simple domain wall trajectory is given by In the case of a compound wall, in which both θ and χ vary along the wall, the corresponding tension is twice that of a simple domain wall In Carter's pentahedral model there is a simple domain wall trajectory between any of the five minima of the potential, with either constant θ or χ . For constant θ there is a simple domain wall trajectory between any two adjacent minima, with phases (φ, ψ), in the sequence Here φ and ψ vary in five successive steps of ∓2π /5 and ±4π /5 respectively thus maintaining θ = constant (note that the phases are defined up to a multiple of 2π ). These trajectories are illustrated on the lower panel of Fig. 1 by the red path (darker grey in black and white) on the surface of a torus with line element If χ is a constant then there is a simple domain wall trajectory between any two adjacent minima in the sequence In this case φ and ψ vary by successive steps of ±4π /5 and ±2π /5 respectively thus maintaining χ = constant. These trajectories are illustrated by the green path (lighter grey in black and white) on the lower panel of Fig. 1.
There are Y-type junctions connecting three simple domain walls trajectories. Surrounding a Y-type junction there are two domain walls with constant θ (or χ ) and another one with constant χ (or θ ). A simple example is the configuration which corresponds to two domain walls with constant χ (1-4 and 1-3) and one with constant θ (3)(4). The above trajectory is illustrated by the red path on the left upper panel of Fig. 1. The overall change in the phase φ is equal to 2π . In fact there must always be jump of 2π in either φ or ψ around a Y-type junction.
Another example corresponding to a Y-type junction is the configuration where two domain walls with constant θ (1-2 and 1-5) and one with constant χ (2)(3)(4)(5) meet. In this case it is the overall change in ψ that is equal to 2π . This trajectory is illustrated by the red path on the right upper panel of Fig. 1.
What about X-type junctions? Is there a trajectory in which φ and ψ are continuous around a X-type junction? The answer is This trajectory is illustrated by the red path on the left middle panel of Fig. 1. As correctly pointed out in [10,13], Carter's pentahedral model allows for square domain wall lattice solutions that are stable, if ε is sufficiently small. However, as we will show in the following section, such lattices are never generated from realistic initial conditions. Around a X-type junction where three walls with constant θ (or χ ) meet one wall with constant χ (or θ ), both φ and ψ must change by a factor of 2π . Consider the following example which is illustrated by the red path on the right middle panel of In this case the energy of the junction associated with the presence of a string is greater, by a factor of 2, compared to Y-type junctions. Hence, the string does nothing for the stability of the junction. Such X-type junction would be unstable and decay into a pair of Y-type junctions, even if ε is small (the green line represents a possible decay channel). This is the reason why, in the context of Carter's pentahedral model, Y-type junctions are preferred, with the exception of very specific realizations.
Simulations
In order to test our analytical expectations we will now present the results of a few 256 2 simulations in two spatial dimensions. Although these simulations are relatively small in size and dynamical range, they are more than enough to support our analysis. In all the simulations we use the PRS algorithm [14] modifying the domain wall thickness in order to ensure a fixed comoving resolution. More details about the numerical code can be found in [9] and references therein. Fig. 2 shows four snapshots of a matter era simulation of a realization of Carter's pentahedral model (ε = 0.2) with random initial conditions. At each grid point, one of the minima was randomly assigned, all the minima having equal probability. The cosmic time t, is increasing from left to right and top to bottom (the horizon is approximately 1/10, 1/8, 1/6 and 1/4 of the box size respectively). The simulations show that Y-type junctions are much more frequent than X-type ones. This is not surprising since the probability that the combination of two Y-type junctions will give rise to one stable X-type can be easily calculated and is equal to 2/9, assuming that the corresponding minima are randomly chosen with equal probability, subject to the constraint the same minima cannot be assigned to both sides of a domain wall. On the other hand, the probability that the collapse of a square domain with Y-type junctions at the vertices will give rise to a stable X-type junction is equal to 1/21, again assuming a random configuration. Furthermore, this does not take into consideration that stable X-type junctions may break into two Y-type junctions if enough energy is available. However, some rare stable X-type junctions can still be identified in the simulations. Fig. 3 is similar to Fig. 2 except that now ε = 0.05. As a consequence, the energy density inside the domain walls is reduced by a factor of 4 while their thickness increases by a factor of 2. On the other hand, the strings remain roughly the same. Of course, in the ε → 0 limit the dynamics would be completely dominated by the strings. However, in this limit the thickness of the domain walls becomes very large (δ w ∝ ε −1/2 ) and the domain wall network would no longer be well defined. In any case, this would not help domain walls as a possible dark energy candidate since, in that case, the contribution of the junctions to the energy density would be the dominant one, thus leading to an equation of state parameter significantly greater than −2/3. Moreover, the strings have a small impact on the overall dynamics as long as the average domain wall energy density dominates over that associated with the junctions. This happens for σ L μ ∼ 1 or equivalently δ w /L 1, where L is the characteristic scale of the network. Such condition is always verified as long as the thickness of the domain walls is much smaller their typical curvature scale. In fact, the string energy per unit length (μ ∼ 1) of a stable Y-type junction is of the same order as the energy per unit length of a stable X-type junction (∼V ε δ 2 w ∼ 1), which means that X-type junctions are configurations of delicate equilibrium, susceptible to decay in the presence of relatively small perturbations. Hence, even the rare stable X-type junctions which appear in the simulations would probably not be there if the domain wall thickness had not to be artificially enlarged in order to ensure that the domain walls were resolved by the numerical code. Fig. 4 shows the configuration space distribution for the last time step of the simulation in Fig. 2 Fig. 6 shows the evolution of a hand-made periodic square lattice realization of Carter's pentahedral model with ε = 0.2. The initial configuration of minima was chosen to allow for X-type junctions corresponding to a continuous φ and ψ , and X-type junctions around which both φ and ψ change by a factor of 2π . As expected, the simulations show that the former are stable while the later are unstable and decay into two stable Y-type junctions. It is also possible to choose the initial conditions in a way that a square lattice with only X-type junctions is formed and we have verified that such a configuration is stable, as claimed by Carter [10,13]. However, one should bear in mind that it corresponds to a very specific set of initial conditions which would violate causality, if they were to extend over scales larger than the particle horizon.
Conclusions
In this Letter we confirmed that there are very special realizations of Carter's pentahedral model, corresponding to square lattice configurations with X-type junctions, which could be stable. However, we have shown that more realistic realizations of Carter's pentahedral model, such as those with random initial conditions, give rise to a network with Y-type junctions. This leads to a domain wall network whose properties are virtually indistinguishable from those of a specific realization of the ideal class of models with 4 real scalar fields (and 5 minima), with similar initial conditions. The ideal class of models has been studied in detail in [7,9] where a compelling evidence for a gradual approach to scaling, with L ∝ t, was found both in the radiation and matter eras. As a result, and in spite of its very interesting topological properties, Carter's pentahedral model does not naturally lead to a frustrated network with v ∼ 0 and L t, a necessary condition for domain walls to provide a contribution to the dark energy budget. There are other models which allow for X-type junctions (see for example, [15,16]) but they also do not lead to a frozen network, starting from random initial conditions [7,9]. | 3,050.6 | 2009-07-24T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Fed and ECB: which is informative in determining the DCC between bitcoin and energy commodities?
Purpose – This paper provides an important perspective to the predictive capacity of Fed and European Central Bank (ECB) meeting dates and production announcements for the dynamic conditional correlation (DCC)betweenBitcoinandenergycommoditiesreturnsandvolatilitiesduringtheperiodfromAugust11,2015toMarch31,2018. Design/methodology/approach – To assess empirically the unanticipated component of the US and ECB monetary policy, the authors pursue the Kuttner ’ s approach and use the federal funds futures and the ECB funds futures to assess the surprise component. The authors use the approach of DCC as introduced by Engle (2002) during the period from August 11, 2015 to March 31, 2018. Findings – The authors ’ results suggest strong significant DCCs between Bitcoin and energy commodity marketsifmonetarypolicysurprisesare incorporated invariance.Theseresultsconfirmedthe financialization of Bitcoin and commodity energy markets. Finally, the DCC between Bitcoin and energy commodity markets appears to respond considerably more in the case of Fed surprises than ECB surprises. Originality/value – This study is a crucial topic for policymakers and portfolio risk managers.
Introduction
The role of a central bank is not naturally and historically dedicated to creditworthiness, although some economists argue that it is not possible to avoid itsee Goodhart (1999). An IMF report (2015) notes the existing challenge to the use of monetary policy for financial stability and the need for appropriate prudential policies. Prudential policies then serve financial stability while monetary policy remains limited to price stability. However, this report also indicates that knowledge about the relationship between monetary policy and financial stability is evolving and that circumstances are changing. In this context, acting with monetary policy on components comprising solvency issues therefore leads to questioning the articulation between monetary policy and the macroprudential policy developed since the early 2000s. Politics has made the choice up to now to implement a macroprudential policy to complement the microprudential tool of central banks or delegated agencies rather than using monetary policy.
Due to growing popularity and significance of Bitcoin, practitioners, investors and researchers have lately began to evaluate Bitcoin from the viewpoint of finance and economics. Also, Rogojanu and Badea (2014) investigate the benefits and weaknesses of Bitcoin and assess it through additional complementary monetary structures. Brandvold et al. (2015) concentrate on the impacts of Bitcoin trades to price detection. Ciaian et al. (2016) analyze Bitcoin price structure by concentrating on market influences of supply/demand and numerical currencies components. Few surveys have appeared from the viewpoint that Bitcoin represents an alternate to traditional currencies in periods of low confidence, such as through the international financial crisis in 2007, therefore suggesting Bitcoin as numerical gold (Rogojanu and Badea, 2014). Baur and Lucey (2010) conclude that Bitcoin is a hybrid among important metals and traditional currencies. They furthermore underline its responsibility as a beneficial diversifier and an investment.
The sensitivity of asset prices to monetary policy has proven to be a dominant theme of the past year. Driven by low policy rates and quantitative easing, long-term yields on major bond markets had fallen to unprecedented lows in 2012. Since then, markets have become very sensitive to any signs of a reversal of these exceptional conditions. Concerns over the stance of US monetary policy played a key role -as demonstrated by the episode of bond market turmoil in mid-2013 and other key events of the period under review. However, monetary policy has also had an impact on asset prices and, more generally, on investor behavior. The events of the past year have shown thatby its influence on risk perception and the attitude of market participants in this regardmonetary policy can have a powerful effect on financial conditions, as evidenced by risk premiums and financing conditions. In other words, the effects of the risk-taking channel were widely manifested throughout the period (Rajan, 2006). The extraordinary influence exerted by the central banks on the world financial centers was manifested in a very visible way on the main bond markets: the slope of the yield curve was particularly sensitive to all the announcements and changes in policy expectations to come up. While short-term rates remain largely anchored by the low-key rates, medium-term rates react to the forward-looking orientations, and long-term rates were dominated by asset purchases, long-term expectations and the perceived credibility of the central bank. When the Federal Reserve (Fed)the first major central bank to acthinted in mid-2013 that it would slow down asset purchases, long-term bonds suffered heavy losses. Even though bond prices fell less than during the massive decommitments of 1994 and 2003, the overall losses in market value were heavier this time because the stock of treasury securities was much higher.
JCMS 4,1
Unconventional monetary policy measures and forward-looking guidelines played a decisive role in the communication of central banks. After Fed expressed to leave federal funds rate low, even after asset programs ended, investors downgraded medium-term expectations for short rates, and the dispersion of opinions has diminished. At the same time, there was a broader consensus among market participants that long-term rates would eventually increase in the medium term. Bernanke and Kuttner (2005) give proof that monetary policy news takes the lead to reduce stock market returns. Basistha and Kurov (2008) conclude that the response of stock market returns to surprises regarding unexpected fluctuations in US monetary policy varies on the condition of the business phase and on credit market conditions. They find a considerably greater reaction in downturn and in difficult credit market situations. Ehrmann and Fratzscher (2009) conclude that the returns correlated with 50 equity markets internationally clearly react to US monetary policy announcements. Hayo et al. (2012) give persuasive confirmation that US monetary policy greatly influences the returns of 17 emerging equity markets during the period from 1998 to 2009. Hayo et al. (2010) show that US target rate adjustments and the Federal Open Market Committee (FOMC) statement have a considerable effect on European and Pacific equity market returns. Wongswan (2009) investigates 15 foreign equity indexes in Asia, Europe and Latin America and concludes that they considerably respond to US monetary policy news at brief time possibilities. A remarkable exclusion is by Bailey (1990), who finds a low suggestion of foreign equity market answers to Federal funds rate announcements.
In recent times, energy commodity futures have appeared as an enormously prevalent asset portfolio for investors and fund managers (Andreasson et al., 2016). The speed in the financialization of energy commodity markets has also significantly increased the numeral of market members and contributors. In addition to remaining employed for hedging and speculative reasons, energy commodity futures can similarly increase away the risk of differentiated stock/bond portfolios, principally during financial and economic recessions. Subsequently, understanding of the elements that explain energy futures markets is possible to make important news for investors and managers.
Among the different energy commodities, crude oil maybe the majority considerable given its vital liability in the globe economy comparative to more energy commodities, principally in requirements of causing crisis (Hamilton, 1983(Hamilton, , 2003(Hamilton, , 2009(Hamilton, , 2013. Additionally, crude oil is essential for transportation, industrial and agricultural segments, whether employed as feedstock in production or as a surface fuel in utilization (Mensi et al., 2014b).
At present, there have been only limited surveys on the influence of shock component in the inventory pronouncement on price change and volatility. Chang et al. (2009) employ analysts' predictions from Bloomberg to investigate the responses of intraday crude oil futures returns to unanticipated portfolio fluctuations. They obtain an instantaneous reply of crude oil returns to supply announcements. Besides, they maintained that the response is greater when the assessment was produced by analysts with forecast correctness in the earlier period. Gay et al. (2009) conclude that the unanticipated adjustments in Energy Information Administration (EIA) natural gas inventory accounts have a considerable influence on intraday futures returns directly following a given news. By applying a GARCH (generalized autoregressive conditional heteroskedasticity) models, Hui (2014) tries to measure the influence of the unexpected inventory fluctuations in the EIA statement on daily crude oil returns and volatility. Hui (2014) concludes that inventory shocks have negative influence on returns but recommends that there is no proof of impact on return volatility. Chiou-Wei et al. (2014) investigate the dynamics of US natural gas futures and spot prices across the weekly pronouncements by the EIA statements. Their empirical findings underline an opposite link among the unexpected inventory adjustments and changes in Fed and ECB futures prices. Besides, Chiou-Wei et al. (2014) show no proof of the influence of inventory surprises another than on the date when the EIA report is published.
In recent times, Ye and Karali (2016) utilize intraday data to examine the reply of crude oil returns and volatility to inventory releases by the American Petroleum Institute (API) and EIA during the period from August 2012 to December 2013. They find that inventory shocks in both API and EIA statements apply an instant inverse influence on returns and a positive effect on volatility.
In the same alignment, Halova et al. (2014) conclude at intraday data to examine the effect of the unexpected part in EIA's crude oil inventory statements on both return and volatility. They show that energy returns react further greatly to unexpected variations in inventory levels through the injection period than over the withdrawal period.
Furthermore, crude oil market volatilities are greatly established to spillover to additional commodity markets (Kang and Yoon, 2013;Kang et al., 2016Kang et al., , 2017Mensi et al., 2013Mensi et al., , 2014aMensi et al., , 2015Chebbi and Derbali, 2015, 2016a, 2016b, as well as financial markets (Balcilar and Ozdemir, 2013;Balcilar et al., 2015Balcilar et al., , 2017Balli et al., 2017;Berger and Uddin, 2016;Kang et al., 2016;Lahmiri et al., 2017;Mensi et al., 2014aMensi et al., , 2015Narayan and Gupta, 2015). Miao et al. (2018) study the impact of the unexpected part of weekly crude oil inventory in EIA statements on oil futures and options prices. Miao et al. (2018) conclude that prices clearly respond to the inventory shock on news day. Furthermore, they show that futures return considerably reduces with positive surprises and rises with negative surprises. Moreover, as Shrestha (2014) notes, one can predict price detection to appear mostly in the energy futures markets since futures prices respond to new pronouncement faster than spot prices recognized smaller transaction expenses and healthier ease of minor selling associated with energy futures agreements. Furthermore, it is believed that futures market volatilities create spot market volatilities for crude oil Kilian, 2014, 2015;Baumeister et al., , 2017. Consequently, defining the issues that drive the energy commodity markets is of main implication for both investors and policymakers, which is our objective for this paper via examination of the significance of surprises from Fed and ECB (European Central Bank) announcements and meeting dates.
This study is extremely directly related to the current literature on the reaction of the energy commodities (Crude Oil WTI (West Texas Intermediate), Gasoline RBOB (Reformulated Gasoline Blendstock for Oxygen Blending), Brent Oil, London Gas Oil, Natural Gas and Heating Oil) returns and volatilities to Fed and ECB events during the period from August 11, 2015 to March 31, 2018. In particular, it is commonly known that transaction movement can be influenced by the new information (Fed and ECB monetary policy events in our paper). Giving this new info is accessible in the financial market, investors respond through portfolio variations of their portfolios further intensively among energy commodities portfolio, which in turn starts to an expansion in trading capacity. As demonstrated by the well-known positive nexus among volatility and trading size (Andersen, 1996;Karpoff, 1987, among others), the growth in trading size might, sequentially, transform into greater volatility. Additional potential justification is presented by Ross (1989), which examines the growth in volatility in asset returns related to the information publication. Therefore, it is crucial to consider the ultimate nexus among monetary policy decisions and volatility of stock market returns, especially, energy commodity returns.
So, we examine in this paper the US and European monetary policy surprises as a potential determinant of the volatility of energy commodity returns is of key significant given a period of quick failure of the European markets and the principal role of US monetary policy movements on financial asset prices. In this study, we examine the time-varying relationships among strategic commodities covering sector of energy (Crude Oil WTI (West Texas Intermediate), Brent Oil, Gasoline RBOB (Reformulated Gasoline Blendstock for Oxygen JCMS 4,1 Blending), Heating Oil, London Gas Oil and Natural Gas) and Bitcoin, over the period from August 11, 2015 through March 31, 2018. For this purpose, we use the DCC-GARCH approach with incorporating the Fed and ECB monetary policy surprises.
Our empirical results in this paper confirm strong significant dynamic conditional correlations between Bitcoin and energy commodity markets if monetary policy surprises are incorporated in variance. These results proved the financialization of Bitcoin and commodity markets. Also, the results estimated and more specifically those related to the level of the persistence of volatility are sensitive to the presence of monetary policy surprises into the DCC-GARCH (1,1) model. The conditional correlations between Bitcoin and energy commodity markets appear to respond considerably more in the case of Fed surprises than the ECB surprises.
The rest of our paper is organized as follows. Section 2 describes the econometric methodology utilized in this study. Section 3 defines the data employed in this study. Section 4 is devoted to the empirical results of the impact of US and European monetary policy on a sample of energy commodities market. Section 5 concludes. Finally, Section 6 presents policy implications of our paper.
Econometric methodology
The methodology employed in this study, which tries to measure Bitcoin and energy commodities returns and volatilities responses to monetary policy surprises announced by the Fed and ECB, is based on DCC multivariate model as recommended by Engle (2002).
The DCC model has the elasticity of univariate GARCH models but does not tolerate from the "curse of dimensionality" of multivariate GARCH models. The estimation of GARCH-DCC models involves two stages. We estimate, in the first stage, the conditional mean return and variance of each variable used in this study. In the second stage, we utilize the consistent regression residuals acquired in the first stage to assess conditional correlations between Bitcoin and energy commodities with Fed and ECB surprise monetary policy news.
To attain the reaction of energy commodities returns correlated with Bitcoin to the surprise component, we employ the following model: the GARCH (1,1) model is taken by the following equation: where, ω , α and β represent the parameters that want to be estimated. The conditional correlation matrix R t of the standardized disturbances ε t is provided by: The matrix R t is assessed as following: Where Q t is the time-varying covariance matrix of ε t and Q *−1 t is the inverted diagonal matrix along with the square root of the diagonal components of Q t . Remarking that Q *−1 t is equivalent to:
Fed and ECB
The DCC-GARCH (1,1) is provided by the following equation: with Q being the unconditional covariance of the standardized disturbances ε t . ω , α and β are the estimated parameters.
In this study, we provide to the literature by including an exogenous variable in the DCC-GARCH (1,1) model, which measures the Fed and ECB monetary policy surprise. Subsequently, the estimated model is given as follows: where S t implies the unexpected Fed and ECB surprise monetary policy announcements at time t. Based on Kuttner (2001), we assess the surprise as the scaled version of change in the oneday current-month futures rate at an event date (d defined as a meeting of the FOMC and ECB). Explicitly, the surprise factor for each target rate change by the FOMC is given by the following formula: where f d represents the current-month futures rate at the end of the announcement day d, f d−1 represents the current-month futures rate at the end of the announcement day (d-1) and D is the number of days in the month.
Data
The data used in this paper contain daily observations on returns and conditional volatilities of energy commodities and Bitcoin. In line to examine the impact of the policy monetary news, we concentrated on the expected and surprise factors in Federal funds target rate changes and ECB target rate changes. Our data sample covers the period from August 11, 2015 to March 31, 2018. We notice that all stock price indices of energy commodities and Bitcoin are transmuted into logarithm form. We identify logarithmic return as r t ¼ lnðp t =p t−1 Þ, where P t is the price index at time t and where P t−1 is the price index at time t-1. Table 1 summarizes main statistical features for the daily returns of energy commodities and Bitcoin. We can show that the lowest possible average of return is 0.000167 for NATURAL GAS but the greatest average is 0.009484 for CRUDE OIL WTI followed by BITCOIN with a value of 0.003414.
Bitcoin and energy commodities
For the volatility of the daily return series of energy commodities and Bitcoin, as assessed by the standard deviation, we can show that London GAS OIL exhibits a daily volatility of 1.215990 versus NATURAL GAS with a value of 0.499397. The lowest volatility is for BITCOIN (0.040825).
The coefficients of skewness are negative for BITCOIN, CRUDE OIL WTI and HEATING OIL variables. The negative sign of the statistical skewness means that the distribution of the different variables is asymmetrical left. The existence of the same sign for these variables justifies the existence of a minimum correlation between them. For the case of BRENT OIL, GASOLINE RBOB, London GAS OIL and NATURAL GAS, skewness value is positive indicating a distribution shifted to the right.
We find that the values of the kurtosis are all greater than 0. Then, we discuss about leptokurtic distribution. The notion of leptokurticity is widely employed in the financial The positive sign of Jarque-Bera statistic implies that we can reject the null hypothesis of normal distribution of the variables utilized in our paper. Furthermore, the high-level value of Jarque-Bera statistic indicates that the series is not normally distributed.
The values of skewness (asymmetry) and kurtosis (flatness) for the different variables used in our paper indicate that the distributions of output are not normally distributed. This is suggested by the test of Jarque-Bera, which rejects the null assumption of normality of the time series of the outputs to a threshold of 1%.
Also, Table 2 summarizes the main statistical features for the conditional volatility of the used series. We can find that on average the high value is for CRUDE OIL WTI (0.009484) followed by BITCOIN (0.003414) and GASOLINE RBOB (0.000959).
The coefficients of skewness are all positive except for the London GAS OIL variable. The positive sign of the statistical skewness means that the distribution of the different variables is asymmetrical right. The existence of the same sign for these variables justifies the existence of a minimum correlation between them. For the case of London GAS OIL, skewness value is negative indicating a distribution shifted to the left.
Then, we find that the values of the kurtosis are all greater than 0. Then, we talk about leptokurtic distribution.
The positive estimate of Jarque-Bera statistic implies that we can reject the null hypothesis of normal distribution of the variables used in our study. Furthermore, the high value of Jarque-Bera statistic signifies that the series is not normally distributed.
In Figures 1-7, we expose the evolution energy commodities and Bitcoin return series. It can be seen that the used series present some breaks in their return evolutions.
In Figures 8-14, we present the evolution energy commodities and Bitcoin conditional volatilities series. It can be seen that the used variables attain their maximum in the present some breaks in their conditional volatility evolutions mainly in the end of the period of study. Table 3 (all values are measured in basis points). It is curious to notice that the standard deviation of the Fed policy action is greater than those of Fed surprise. However, it is interesting to observe that the standard deviation of the ECB policy evolution is inferior to those of ECB surprise.
Monetary policy announcements
The coefficients of skewness are all negative. The negative sign of the statistical skewness means that the distribution of the different variables is asymmetrical left. The existence of the same sign for these variables justifies the existence of a minimum correlation between them. Then, we find that the values of the kurtosis are all greater than 0. Then, we talk about leptokurtic distribution.
The estimate value of Jarque-Bera statistic implies that we can reject the null hypothesis of normal distribution of the variables used in our study. Furthermore, the high value of Jarque-Bera statistic signifies that the series is not normally distributed. (2001), which has been popular in the academic literature. More specifically, we employ the changes in the Federal funds futures rates after the FOMC meetings and ECB funds futures rates after the ECB meetings. Table 4 reports the descriptive statistics for the estimated dynamic conditional correlation between Bitcoin and energy commodities in presence of Fed surprises. From this table, we can find that at maximum the higher dynamic conditional correlation is between Bitcoin and CRUDE OIL WTI (0.974319) and between Bitcoin and NATURAL GAS (0.970986). This result implies the importance of these two commodities in the financial markets. Also, this finding indicates the significance of the dynamic conditional correlation between Bitcoin and energy commodities mainly in the presence of Fed surprises. In this case, we can observe the importance of the responsibility of US monetary policy in financial markets especially, for the energy commodity indices volatilities.
Source(s): Elaborated by authors
However, Table 5 summarizes the descriptive statistics for the estimated dynamic conditional correlation between Bitcoin and energy commodities in the presence of ECB surprises. From this table, we can show that at maximum the higher dynamic conditional correlation is between Bitcoin and BRENT OIL (0.939158) and between Bitcoin and HEATING OIL (0.935689). This result suggests the importance of these two commodities in the financial markets. Also, this conclusion reveals the significance of the dynamic conditional correlation between Bitcoin and energy commodities mainly in the presence of ECB surprises. Additionally, from Table 4 and Table 5, we can conclude that the Fed surprises are more important than the ECB surprises in operation of financial markets. Figures 15-20 show the evolution of the dynamic conditional correlation between Bitcoin and energy commodities in the presence of Fed surprises and ECB surprises estimated by Kuttner (2001) and define the mentioned surprises by applying the one-day change in the current-month futures rate. Statistical significance at the 1% is denoted by *. Target rate changes and surprises are calculated in basis points. Volatility and returns are measured in percent Table 3.
Descriptive statistics for US and ECB monetary policy data
Fed and ECB DCC-GARCH (1,1) model. From these figures, we can observe that the correlation between Bitcoin and energy commodities in the presence of Fed surprises is more important and significant than those in the presence of ECB surprises. These findings confirm the conclusions shown in Table 4 and Table 5. Also, we can conclude that the dynamic conditional correlation between Bitcoin and energy commodities in the presence of Fed surprises contains more important peaks (in positive and in negative) than those issued from nexus between Bitcoin and energy commodities in the presence of ECB surprises.
In addition, and looking at the daily period, we can prove that surprise components in Federal funds target rate changes have played a crucial role in the developments of major energy commodities volatilities. This finding is not surprising. One potential justification is that given the essential effect of US economy on the global economy, the news regarding Fed and ECB adjustments in US monetary policy may significantly influence foreign economic fundamentals and thus the volatility of energy markets. Then, the lowest impact of ECB monetary policy is justified by the importance of the US strategies and the US investors to dominate the international financial markets and the global economy. More specifically, in all cases, the Fed monetary policy surprises have a significant impact on major energy commodities volatilities than the European monetary policy surprises. Table 6 reports the estimation results of dynamic conditional correlation GARCH (1,1) between Bitcoin and energy commodities in the presence of Fed surprises and ECB surprises. Some interesting evidences appear from this estimation. First, we can observe that Fed surprises and ECB surprises affect the dynamic conditional correlation between Bitcoin and energy commodities similarly. This negative sign indicates that US and European monetary policies and shocks drop the mean level of volatility. According to the impact of US and European monetary policies on the correlation between Bitcoin and selected energy commodities in this study, the results reported in Table 4 reveal that 1% raise in the surprise of FOMC monetary policy reasons a decline of roughly 0.0534862% in the correlation between Bitcoin and London GAS OIL returns, and respectively, 0.0180978, 0.0154627, 0.0115703, 0.0097802 and 0.0056482 for the correlations associated with the returns of NATURAL GAS, CRUDE OIL WTI, GASOLINE RBOB, HEATING OIL and BRENT OIL.
Source(s): Elaborated by authors
Additionally, we can find that 1% raise in the surprise of ECB monetary policy reasons a decline of roughly 0.0802546% in the correlation between Bitcoin and HEATING OIL returns, and respectively, 0.0637925, 0.0488792, 0.0376008, 0.0196583 and 0.0188527 for the correlations associated with the returns of London GAS OIL, BRENT OIL, NATURAL GAS, CRUDE OIL WTI and GASOLINE RBOB. In this case, we can observe the important difference between FOMC monetary policy and European monetary policy and their impact on the correlation between Bitcoin and energy commodities returns.
In addition, there is a corroboration that the sum of the volatility coefficients (αþβ) is very close to unity, for the case of all correlation between Bitcoin and energy commodities indices as exposed demonstrating the higher persistence of volatility between the US and ECB monetary policies and commodity markets indices. There is one probable clarification, which finds that such persistence goes along with the financialization of stock market indices, Bitcoin and energy commodities (Creti et al., 2013;Derbali, 2015, 2016a). Our empirical findings emphasize the importance of using GARCH-DCC (1,1) in modeling the time-varying dynamic conditional correlations.
Conclusion
The links between Bitcoin and energy commodity markets have been examined by many researchers using various econometric methodologies. Several significant advancements have also been addressed in order to enrich the estimated findings. Among these improvements, we can notice the presence of monetary policy surprises in the volatility models. These monetary policy surprises in volatility could be caused by country-specific economic and financial events, regional and global economic and financial events (e.g. 2007-2008 financial crisis, European sovereign-debt crisis, 2011 Arab Spring, FOMC monetary policy, ECB monetary policy).
In this paper, we explore the time-varying relationships among strategic commodities covering sector of energy (Crude Oil WTI (West Texas Intermediate), Brent Oil, Gasoline RBOB (Reformulated Gasoline Blendstock for Oxygen Blending), Heating Oil, London Gas Oil and Natural Gas) and Bitcoin, over the period from August 11, 2015 through March 31, 2018.
For this purpose, we use the DCC-GARCH approach with incorporating the Fed and ECB monetary policy surprises. The empirical results in this paper suggest strong significant dynamic conditional correlations between Bitcoin and energy commodity markets if monetary policy surprises are incorporated in variance. These results proved the Table 6. Estimation results of dynamic conditional correlation GARCH (1,1) between Bitcoin and energy commodities financialization of Bitcoin and commodity markets. Also, the results estimated and more specifically those related to the level of the persistence of volatility are sensitive to the presence of monetary policy surprises into the DCC-GARCH (1,1) model. The conditional correlations between Bitcoin and energy commodity markets appear to respond considerably more in the case of Fed surprises than the ECB surprises. Finally, we assume that behavior of every commodity regarding Bitcoin fluctuations indicates the suggestion that commodities cannot be viewed as a homogeneous asset class.
Policy implications
Our paper is a crucial topic for policymakers and portfolio risk managers. From a policymaking viewpoint, having precise estimates of the volatility spillovers throughout markets is an important step in formulating successful monetary policy decisions. From the perspective of portfolio risk managers, our empirical findings are reliable with the idea of cross-market hedging. | 6,615 | 2020-08-14T00:00:00.000 | [
"Economics"
] |
Multispecies mass mortality of marine fauna linked to a toxic dinoflagellate bloom
Following heavy precipitation, we observed an intense algal bloom in the St. Lawrence Estuary (SLE) that coincided with an unusually high mortality of several species of marine fish, birds and mammals, including species designated at risk. The algal species was identified as Alexandrium tamarense and was determined to contain a potent mixture of paralytic shellfish toxins (PST). Significant levels of PST were found in the liver and/or gastrointestinal contents of several carcasses tested as well as in live planktivorous fish, molluscs and plankton samples collected during the bloom. This provided strong evidence for the trophic transfer of PST resulting in mortalities of multiple wildlife species. This conclusion was strengthened by the sequence of mortalities, which followed the drift of the bloom along the coast of the St. Lawrence Estuary. No other cause of mortality was identified in the majority of animals examined at necropsy. Reports of marine fauna presenting signs of neurological dysfunction were also supportive of exposure to these neurotoxins. The event reported here represents the first well-documented case of multispecies mass mortality of marine fish, birds and mammals linked to a PST-producing algal bloom.
Introduction
The paralytic shellfish toxins (PST) associated with paralytic shellfish poisoning (PSP) are potent neurotoxins produced by natural, environmentally-driven populations of some marine dinoflagellates, mainly by Alexandrium spp [1]. PST include saxitoxin (STX) and at least 21 derivatives that can be produced by the algae in various combinations and concentrations. Grazers, such as copepods, could play a key role in the increase of paralytic shellfish toxin production by dinoflagellates [2][3][4]. Some of these compounds are highly neurotoxic, acting as sodium channel-blocking agents restricting signal transmission between neurons, particularly a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 in mammals, birds and fish-with a toxic potency up to 100-fold greater than sodium cyanide [1]. Mass mortalities of farmed fish during episodic dinoflagellate blooms, the accumulation of PST in shellfish during these events and the resulting regulation for human health and their economic consequences, are well documented [5][6][7]. Reports of marine wildlife mortalities resulting from PST-producing algal blooms are, in contrast, unexpectedly rare [8,9] and often anecdotal, although it is suspected that several cases have been missed or unreported due to lack of adequate investigations [10,11]. As with other kinds of contaminants, the finding of microalgal toxins in waters or organisms alone does not necessarily imply a direct cause and effect relationship with fauna mortalities. Among the suspected important effects of PST exposure on wildlife is its potential role in the decline of endangered species such as the North Atlantic right whale Eubalaena glacialis [12,13] and the shortnose sturgeon Acipenser brevirostrum [14] populations inhabiting New England coastal waters.
Filter-feeding aquatic organisms, such as bivalves and zooplankton, appear relatively tolerant to PST and hence can accrue high levels of these toxins by directly feeding on algae [15,16]. This has been identified as a potential mechanism by which toxins are transferred through the food web to higher trophic levels. A classic case of transfer of PST up the food web and subsequent mortality at a higher trophic level involved humpback whales in New England, USA [8]. During a 5-week period beginning in late November 1987, 14 humpback whales, Megaptera novaeangliae, died in Cape Cod Bay after eating Atlantic mackerel, Scomber scombrus, containing PST.
In Eastern Canada, as well as in many regions of the globe, the toxic dinoflagellate Alexandrium tamarense has been identified as a major source of PST [17]. Like many other PST-producing algal species, this species presents a complex life cycle with a dormant phase in the sediments and a vegetative phase in the water column. After a period of dormancy, the sedimentary cysts germinate into vegetative cells that migrate to surface waters and can potentially initiate a new bloom when environmental (physical, chemical and biological) conditions are favorable. The St. Lawrence Estuary is well recognized for the high abundance of A. tamarense cysts in sediments and the recurrence of blooms of this species [18][19][20].
During the summer of 2008, an intense red tide of A. tamarense occurred in the St. Lawrence Estuary, which coincided with unprecedented mass mortalities of marine fish, birds and mammals. Here, we describe this multispecies mass mortality event and present a line of evidence showing that toxins produced by A. tamarense was responsible for mortalities. The present paper also provides unique information on trophic transfers, the accumulation and biotransformation of PSP toxins through the food web from plankton to marine mammals and birds, the transplacental transfer of toxins as well as neurological dysfunctions and clinical signs associated with PST intoxication. It is the first well-documented case of multispecies mass mortality of marine fish, birds and mammals linked to a PST-producing algal bloom.
Results and discussion
Following heavy precipitation (>130 mm within 4 days) and high river runoff (Fig 1), we observed an intense bloom of the toxic dinoflagellate Alexandrium tamarense in the St. Lawrence Estuary (SLE) in August 2008, which coincided with an unprecedented mass mortality of marine fish, birds and mammals (Fig 2). This typical association between A. tamarense and river runoff has been previously attributed to the beneficial effects of low salinity and high temperature on cellular growth rate, the riverine input of terrestrially-derived dissolved organic matter, nutrients and other materials such as humic substances that can serve as growth stimulants and/or increased water column stability that favours the proliferation and retention of cells [21]. However, the net population growth rate such as observed at the mouth of the Saguenay River during the high river runoff (0.75 d -1 , Fig 1) was well above the range of growth (0.3-0.5 d -1 ) measured in laboratory studies for this species [22][23][24]. In addition to growth, the sudden increase in A. tamarense cell concentrations associated with high river runoff could be also due to resuspension and germination of cysts from sediments. Indeed, the SLE is recognized for the presence of high concentrations of cysts in sediments, especially where depth is less than 100 m and close to the river plumes where cyst concentrations can exceed 200 cysts cm -3 [20]. Moreover, vertical migration and convergent circulation can combine to accumulate cells [25]. Thus, the sudden increase in A. tamarense cell concentrations associated with high river runoff and low salinity during the month of August 2008 may thus be the result of a combination of biological and physical processes. Based on helicopter surveys, this August bloom attained some 600 km 2 in size and remained in the SLE for two weeks before dissipating.
The A. tamarense bloom trajectory shown in Fig 2A was simulated using observed winds and modelled currents. The position of the bloom was forecasted daily during the event to guide sampling and to warn shellfish collectors and producers. Projections were validated from mollusc toxicity and/or the abundance of A. tamarense recorded at several coastal monitoring sites in the SLE and Gulf of St. Lawrence (GSL) (Fig 3).
Up to 80 x 10 3 cells L -1 of A. tamarense were first identified on August 4, 2008 at Tadoussac (Fig 1), at the confluence of the Saguenay and St. Lawrence rivers (Fig 2A, Site 1). Smaller localized increases in A. tamarense abundance and shellfish toxicity were also observed at the same time at Baie-Comeau, close to the mouth of the Manicouagan and Aux-Outardes rivers (Fig 3, Site 9). Prior to the August event, close examination of Fig 3 reveals peaks of Alexandrium abundance and of mollusc toxicity in June, well before the mass mortality event in August. In an advective environment such as the St. Lawrence Estuary, it is likely that this June bloom was dispersed and advected outside of the Estuary well before the August event (as confirmed by our initial modelling work; data not shown here). Accordingly, the mussel toxicity decreased rapidly after the peak in June reaching low values in July. Nevertheless, we can't exclude that this June event could have contributed, to a certain degree, to seeding of A. tamarense cells in the St. Lawrence Estuary and to toxicity of marine fauna prior the major bloom and mass mortalities events in August.
Focusing on the August event, our model simulation was initiated with a bloom of 600 km 2 area in size at Tadoussac on August 4 based on our helicopter survey observations. In our modelling, the bloom at Tadoussac then drifted towards Bic Island on August 5 and 6 (Fig 2A, Site 3), where its position was confirmed by mollusc toxicity (Fig 3). From August 6 to 14, constant winds from the north-east (Fig 1) confined the bloom to the coast. From August 14, winds were calm and the bloom drifted only by tides and freshwater outflow towards the GSL as shown by the arrows on Fig 2. On August 19, strong southerly winds (Fig 1) pushed the bloom offshore and outside of the SLE contributing to its dispersal north-eastward (Fig 2A) before reaching Mont-Louis (Site 5), where A. tamarense abundances remained weak (Fig 3).
Shellfish rapidly bioaccumulate PST and are often used as sentinel species in toxin monitoring programs. Live blue mussels (Mytilus edulis) collected during the event and tested by the AOAC mouse bioassay, revealed extremely high PST (Fig 3) especially near Bic Island (up to 1 ×10 4 μg STXeq 100 g −1 soft tissues, Fig 1D) where the bloom remained for several days ( [26]) and was the highest ever recorded in the SLE since shellfish monitoring began in 1942, attesting to the magnitude and persistence of the toxic A. tamarense bloom.
During the bloom, carcasses of 10 beluga (Delphinapterus leucas) (including 1 on September 10), 7 harbour porpoises (Phocoena phocoena) and 85 seals were reported. A dead juvenile fin whale (Balaenoptera physalus), a species designated at risk, was also observed drifting on September 17. The number of seal and beluga mortalities was well above the average for the month of August. For example, for the SLE beluga, an endangered population, the 25-year mean number of carcasses for the month of August is 2.6 (S1 Fig). Grey seals (Halichoerus grypus) were the most numerous marine mammal found dead, predominantly adult females (20/25 examined), 14 of which were pregnant when examined at necropsy. In addition, 76 reports of mortality events involving hundreds of fish-and mollusc-eating birds belonging to 15 different species, as well as fish and invertebrates, were documented during the month of August (Table 1, Fig 2). Additionally a total of 591 carcasses of birds were observed during a helicopter survey. Most birds (82%) found dead were larids, especially Black-legged Kittiwake (Rissa tridactyla, 59%). Other dead birds included Northern Gannet (Morus bassanus, 7%), Double-crested Cormorant (Phalacrocorax auritus, 4%), alcids (Black Guillemot, Cepphus grille, Common Murre, Uria aalge, and Razorbill, Alca torda, all 1.4%), loons (Common Loon, Gavia immer, and Red-throated Loon, Gavia stellata, each <1%), Common Eider (Somateria mollissima,<1%), and Northern Fulmar (Fulmarus glacialis,<1%). Sixteen birds were also observed moribund during this survey, 10 of which were larids. Bird mortalities were likely underestimated as numerous carcasses were obscured by algae or debris.
The sequence of faunal mortalities followed the drift of the bloom along the south coast of the SLE (Fig 2). Three days after the initiation of the bloom, about 100 dead or moribund birds (8 species) were first observed near Tadoussac (Fig 2A, Site 1) by Parks Canada staff. Over the subsequent 20 day period, the area with reported mortalities progressively expanded eastward along the Gaspé Peninsula following the drift of the bloom. Following dispersal of the bloom around August 21 near Sainte-Anne-des-Monts, mammal carcasses continued to be reported for a few more days but most were decomposed. Few carcasses were found outside the zone affected by the A. tamarense bloom. For example, some were reported near Baie-Comeau (Fig 2A, Site 9) coinciding with a localized small increase in A. tamarense abundance and shellfish toxicity (Fig 3). Five dead Northern Gannets were also found in the GSL from August 9 to 15. Adult gannets on GSL breeding colonies are feeding their young chicks in August and adults often forage for fish in the SLE to feed themselves and their chicks [27].
Pathological analyses were performed on a total of 74 birds of 13 species, 10 fish of 2 species, 21 grey seals, 4 harbour seals, 3 harbour porpoises, 2 beluga and 1 fin whale. Carcass preservation was evaluated as good (fresh/edible) in 32% of cases, fair (decomposed, but organs basically intact): 37%, and poor (advanced decomposition): 31%. Decomposition may have influenced detection of toxins as the tissues became friable and liquefied, exposing toxins to intense enzymatic and microbial degradation (see below). Most birds (67%) and almost all marine mammals (96%) examined at necropsy were in good nutritional condition and many with food in stomachs. Pathological analyses did not identify a cause of death in 85% of cases (i.e., gross lesions could not be attributed to an etiological agent such as a pathogen, traumatic injury or other specific disease process) ( Table 1). Gross lesions included wet, heavy and congested lungs likely due to respiratory paralysis consistent with PST [1]. Congestion of the of St. Lawrence (see Fig 2). The horizontal red lines indicate the level of toxicity considered hazardous for human consumption.
https://doi.org/10.1371/journal.pone.0176299.g003 Table 1 tracheal and oral mucosa was also observed. Some of the intoxicated grey seals and one beluga had blood-stained fur or skin on the head, possibly caused by irritation and motor incoordination due to PST [1] (Fig 4). One of the intoxicated beluga had superficial parallel cutaneous lacerations associated with haemorrhage in subcutaneous tissues on its flank highly suggestive of boat propeller trauma (S2 Fig). Animals paralysed due to PST may be more vulnerable to vessel collisions.
Various tissues from carcasses were tested for PST by Enzyme-Linked Immunosorbent Assay (ELISA), with selected samples analysed by LC-MS and pcox-FLD (Table 1, see also S1-S3 Tables). Analyses revealed levels of PST above detection limits (i.e., >2.2 μg 100 g -1 ) in the liver and/or gastrointestinal tract (GIT) contents (Table 1) as well as other tissues (S1-S3 Tables) in more than half of the 321 carcasses tested. An unidentifiable fish from the stomach of a Razorbill, which tested positive for saxitoxin in the liver (11 μg 100 g -1 ) and GIT tissues (64.1 to 71.2 μg 100 g -1 ), also tested positive for saxitoxin (60 μg 100 g -1 ), providing strong evidence for the trophic transfer of PST leading to mortality. Of 8 grey seal fetuses examined, 4 tested positive for PST. Liver and brain tissues of 6 of the corresponding 8 pregnant females tested positive for PST indicating transplacental transfer.
Limited diet data from marine birds and mammals in the SLE indicate that harbour seals, porpoises, razorbills, kittiwakes, gannets, and cormorants feed commonly on coastal planktivorous fish such as sand lance (Ammodytes sp.) and capelin (Mallotus villosus) [27][28][29][30][31]. Cod (Gadus morhua) and herring (Clupea harengus) are important prey for grey seals in August [32]. Beluga are generalists, feeding on sand lance, redfish (Sebastes spp.), capelin, herring and some benthic invertebrates [28]. Some rock crabs (Cancer irroratus) found dead are necrophagic. The presence of PST in live planktivorous and higher trophic level fish as well as in benthic invertebrates and zooplankton samples collected during the bloom (S4 and S5 Tables) provides direct evidence for the trophic transfer of PST leading to mortality. Higher mortalities among female vs. male grey seals, beluga and porpoises may indicate differences in prey preferences or foraging behaviour, seasonal geographic segregation of the sexes [33,34] or a differential response to biotoxins due to differences in body mass or physiology [35].
PST exists as a suite of over 21 related molecular forms that vary in toxicity, with saxitoxin (STX) being the parent form and one of the most toxic [1,11]. The precise mixture (profile) of toxins varies depending on the strain of Alexandrium spp. as well as on metabolic transformation by the consumer from lower to higher trophic levels and its state of degradation [36][37][38]. Toxin profiles derived from HPLC are shown in phytoplankton, sand lance (stomach and liver), Razorbill (stomach contents and GIT) and grey seal (GIT and liver), all collected during the bloom (Fig 5). The phytoplankton contained principally neosaxitoxin (NEO), N-sulfogonyautoxin-2 and -3 (C1 and C2), with smaller amounts of gonyautoxin-1 to -4 (GTX1-4) and STX. In sand lance, STX became relatively more abundant. In Razorbill the profile was characterized by NEO and STX with other toxins declining in importance. The profile in the unidentifiable fish from the stomach of the Razorbill resembled that of sand lance, while STX dominated in the Razorbill's GIT. In grey seal, the profile was composed entirely of STX. Some studies have reported similar shifts in profile after consumption of algae by shellfish, finfish and higher animals [36][37][38]. Transformation of C and GTX analogs to NEO and STX, as well as NEO to STX, have been reported and can be the result of chemical, bacterial and enzymatic actions [39][40][41][42]. There have been no studies, to our knowledge, of metabolism in mammalian species but it is reasonable to assume that transformations can continue as the toxins go up through higher trophic levels in the food web.
Additional evidence of exposure to PST comes from reports of apparently neurologically impaired fauna exhibiting unusual behaviour. These include paralysed and uncoordinated sand lance, observed by a DFO SCUBA diver at 2.4 m depth as the bloom drifted past Sainte-Flavie (Fig 2A, Site 4) on August 15; gulls, cormorants and eiders, unaware of their surroundings, unable to enter the water, to raise their head above the water, or to flee a helicopter hovering low above the beach; freshly wounded, sick or orphaned marine mammals, some exhibiting erratic behaviour, including 2 beluga, 1 minke whale (Balaenoptera acutorostrata), 1 grey seal and 2 unidentified seals.
Although it is difficult to demonstrate cause of death due to PST, the following evidence supports the conclusion of a multispecies mortality event due to a toxic A. tamarense bloom: a) PST detected in plankton samples; b) elevated PST levels in molluscs; c) the spatio-temporal occurrence of mortalities following the drift of the bloom, with d) no other cause of death identified pathologically and with PST detected in tissues from these animals; and e) signs of neurological dysfunction and acute death (good body condition, food in stomach) consistent with PST intoxication. Until now, reports of mass mortalities as a result of PST-producing algal blooms have often been anecdotal and many of these remain unpublished. The event reported here is the first well-documented case of mass mortality of multiple species of marine fauna resulting from an Alexandrium bloom. Such mortalities are expected to increase in the future as the frequency, intensity and geographic extent of toxic algal blooms are apparently increasing world-wide to climate change, coastal eutrophication and other environmental perturbations [43,44].
Ethics statement
Field permit: Phytoplankton data come from the long-term monitoring program of the Department of Fisheries and Oceans Canada (DFO). Live invertebrates and fish specimens were collected by DFO staff or with permission of this department. No permit is required in Canada to collect marine fauna carcasses on beaches or drifting, nor for necropsy of carcasses.
Animal research: Marine birds and mammals, including species designated at risk were examined at necropsy only after their natural death. Fig 4: The participant on this figure has given written informed consent (as outlined in PLOS consent form) to publish these case details.
Toxic algae count and identification
As part of the DFO toxic algae monitoring program, phytoplankton samples were routinely collected at 11 coastal sites on weekly basis from May to October 2008. Sea surface (<1 m) phytoplankton samples were collected with a Niskin bottle or a bucket and preserved with Lugol's iodine solution (1% final concentration). Subsamples (100 ml) were settled (Utermöhl technique) and toxic algae counted with an inverted microscope [45] by experienced taxonomists using [46] as the taxonomy guide. A. tamarense cell counts included in the present study are from samples (n = 137) collected between 23 May and 23 September 2008 at 6 selected monitoring sites (Tadoussac, Sainte-Flavie, Mont-Louis, Port-Daniel/Gascons, Baie-Comeau, Sept-Îles) (Fig 3). A. tamarense was the only toxic species present in quantity to explain this mass mortality.
Model simulation, atmospheric and meteorological data
The trajectory of the dinoflagellate bloom was calculated as follows. Daily forecasts of surface currents in the SLE are calculated daily and posted at http://slgo.ca/ocean/index.jsp?lg=en. These forecasts are calculated using the application of Saucier et al. [47][48][49], a 3-D circulation model on a 5 km grid resolution. The model is driven by freshwater run off, monthly mean value at Quebec City and from coastal rivers, tides at the straits of Belle-Isle and Cabot, and by atmospheric forcing: air temperature, wind intensity, dew point, cloud cover, precipitation and evaporation, provided by Environment Canada (http://www.weatheroffice.gc.ca/canada_ e.html). Since the top layer in the model is 5 m thick, wind induced surface currents are underestimated in the model. To reproduce surface trajectories of either an oil spill or a phytoplankton bloom, a surface velocity is added to the forecasted currents using 3% of wind intensity in the direction of the wind. The trajectory of the dinoflagellate bloom was calculated in the field of surface currents using a 4 th order Runge-Kutta interpolator. This trajectory was updated daily using either the end point of the previous simulation or from direct observation confirming the location of the bloom. Simulations were initiated at site 1 with a bloom size of 600 km 2 which was estimated from a helicopter survey (see below).
AOAC mouse bioassay
Data for saxitoxins in commercial shellfish are collected as part of ongoing monitoring efforts performed routinely by the CFIA. Over 100 sampling sites distributed along the coastline of the Estuary and Gulf of St. Lawrence are used to collect various species of bivalve shellfish. In 2008, shellfish samples were analyzed for PSP toxins using the standard mouse bioassay, and collection and toxin analytical methods were performed according to the AOAC protocol [50]. The results of mouse bioassays were converted to toxicity units: μg saxitoxin equivalents (STXeq) kg −1 wet weight of edible mollusc tissue [50]. Toxin data included in the present study are from samples (n = 718) collected between 23 May and 23 September 2008 at 8 selected monitoring sites (Fig 3), which were sampled on weekly basis prior, during, and following the red tide. Only the data coming from blue mussels Mytilus edulis and softshell clam (Mya arenaria) were considered in the present study although some other additional marine invertebrate species were also analysed during the red tide event by the AOAC mouse bioassay.
Helicopter surveys
Between August 13 and 16, 2008, a survey was conducted by helicopter equipped with bubble windows in order to quantify, locate and identify dead or moribund birds. Two experienced observers from CWS noted all sightings using PC-Mapper geo-referenced voice recording software (Corvallis Microtechnology, Corvallis, Oregon, USA). The actual survey time totalled 13.5 h and covered 1193 km around islands and coastal areas of the SLE, between Kamouraska and Sainte-Anne-des-Monts (south shore) and between Baie-Trinité and Baie-Saint-Paul (north shore). A second helicopter survey was also performed between August 14 and 15, 2008, in the same region in order to locate the bloom. This survey allows us to estimate visually the size of the bloom from discoloration of waters.
Necropsies and pathological analyses
Necropsies and pathological analyses were performed by or under the supervision of veterinary pathologists. Carcasses were examined either fresh or after one cycle of freezing and thawing. For marine mammals, bird and fish, the stage of decomposition of specimens was quantified by the code system established by Geraci and Lounsbury [51]. Thus, each carcass examined was assigned to one to five preservation categories determined by several characteristics such as described in [51]: CODE 1: Live Animals; CODE 2: Carcass in Good Condition (Fresh/Edible); CODE 3: Fair (Decomposed, but organs basically intact); CODE 4: Poor (Advanced decomposition); and CODE 5: Mummified or Skeletal Remains. Age of marine mammals was also determined using sectioned teeth [52]. At necropsy, nutritional states of specimens were visually evaluated [52]. Essentially, animals in good flesh with no evidence of muscular or fat depletion due to mobilization of protein or fat reserves (i.e., animals are not emaciated suggestive of starvation) were considered as in good nutritional condition. Gross lesions of dead organisms were also noted and multiple samples, including stomach contents when present, were collected. Tissues of major organs (lung, kidney, gonads, mammary gland, uterus) or suspected were processed for histopathological evaluation by light microscopy using standard laboratory procedures to detect any lesions or abnormal tissue [52]. Ancillary tests, including aerobic microbiological culture were conducted as needed based on histopathological findings in order to identify any pathogens (bacteria, virus, fungus, and parasite) present in tissues [52].
Collection of live invertebrates and fish
Specimens were obtained from the DFO research trawler Teleost and commercial fishing boats between August 8 and 27, 2008. Collections were made at depths ranging from 65 to 315 m between 49˚04.4' N, 67˚11.9' W and 48˚34.9' N, 68˚35.9' W. Zooplankton were collected using a 0.75 m ring net (202 μm mesh) towed vertically from bottom to surface at 1 m s -1 . Specimens were immediately frozen for later quantification of PST (see below).
PST assays
Tissues were assayed using ELISA for saxitoxin (Abraxis LLC, Warminster, PA, USA). Samples, standards and controls were processed according to the ELISA kit instructions. However, the extraction protocol was modified to facilitate further testing of selected samples via HPLC, and to prevent the hydrolytic interconversion of toxin congeners, by performing all extractions in 0.1 M acetic acid [53]. Concentrations were calculated against the standard curve response such as described in the ELISA kit instructions.
Instrumental analysis of PST composition and quantification
Quantitative measurements of toxin composition were performed by liquid chromatography with post-column oxidation and fluorescence detection (HPLC-pcox-FLD) [53], with additional confirmation by HPLC-MS/MS on an API 4000 Q-trap LC-MS (applied Biosystems) with Agilent 1200 HPLC using a triple quadrupole detector and ion-spray [54]. Toxin concentrations were converted [24] to toxicity units using specific toxicity conversion factors provided in Oshima [55]. Table. Concentrations of paralytic shellfish toxins (PST) in tissues of dead birds collected on beaches or drifting. Abbreviation definitions are given in S1 Table. (PDF) S3 Table. Concentrations of paralytic shellfish toxins (PST) in tissues of dead mammals collected on beaches or drifting. Abbreviation definitions are given in S1 Table. (PDF) S4 Table. Concentrations of paralytic shellfish toxins (PST) in tissues of live invertebrates. Abbreviation definitions are given in S1 Table. (PDF) S5 Table. Concentrations of paralytic shellfish toxins (PST) in tissues of live fish. Abbreviation definitions are given in S1 Table. (PDF) S1 File. Data (including metadata) collected in this study. | 6,235.6 | 2017-05-04T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
A study on the ultrasonic vibration on the cutting performance during ultrasonic-assisted drilling of CFRP with twist and dagger drill bits
During the drilling process of carbon fiber reinforced polymer (CFRP), defects such as exit delamination, tearing, and burrs are prone to occur, among which excessive axial force during drilling is the main reason for the generation of exit delamination defects. In this study, ultrasonic-assisted and conventional drilling with twist and dagger drills were investigated on 5 mm-thick multi-directional laminated fiber sheets made of T300-12k/AG80 carbon fiber composite with different orientations. The axial force evolutions in each case were monitored when drilling holes in CFRP plates, and the axial force of drilling and the appearance characteristics of hole outlet were compared. The experimental results showed that the ultrasonic assisted drilling reduced the axial drilling forces of twist and dagger drill by up to 20.6% and 30.7%, respectively, compared to conventional drills. In addition, as the feed rate increases, the axial force of drilling gradually increases, and the exit delamination factor shows an overall increasing trend. The ultrasonic vibrations reduced the exit delamination damage of both drills. A special structure of the dagger drill effectively avoided exit delamination during ultrasonic-assisted drilling, which makes the incision more neat to improve the processing quality.
Introduction
2][3] For example, in China, CFRP composites are extensively used in major projects such as large aircraft, space launch vehicles, and new weapons and equipment.[9][10] To address the various machining quality issues associated with CFRP materials during drilling, researchers from both domestic and international sources have conducted studies from multiple aspects, such as optimization of drilling parameters, matching of tool structures, and hole-making methods.2][13][14][15][16] However, excessively high drilling speeds can result in rapid tool wear and burn marks on the hole wall. 17Other researchers have focused on tooling and machining methods.For example, Fernandes et al. 18 reported that gun drills combined the advantages of low axial force, sharp cutting edges, and fast chip evacuation, which allowed them to quickly cut through fibers and reduce tearing and burrs at the entry and exit of the holes.Liu et al. 19 obtained the variation patterns of axial drilling forces with feed rate, rotational speed, and drill diameter through orthogonal and single-factor experiments.
Marques et al. 20 conducted experimental comparisons of different types of tools in the hole-making process, including drilling forces and resulting hole defects.The study found that step drills can significantly reduce drilling forces.Lazar et al. 21investigated the drilling of CFRP using dagger drills, eight-face drills, and ordinary double-sided drills.They found that the drill point geometry significantly influences drilling forces and torque, and eight-face drills showed the lowest drilling forces and torque.In contrast, ordinary double-sided drills showed the highest.Piquet et al. 22 conducted machining experiments on CFRP using standard twist drills and special drill heads.The results showed that the maximum damage radius generated during drilling with the special drill head was smaller, making it more suitable for CFRP machining.
][25] Shao et al. 26 conducted a study on ultrasonic vibration assisted twist drill in drilling CFRP/Ti materials and found a significant improvement in hole diameter accuracy and hole surface quality.In addition, tool wear conditions in UAD were also significantly alleviated.Li et al. 27 conducted experiments on titanium alloy by using rotary ultrasonic-assisted drilling (RUAD) with a new blade-type tool (eight-face drill), which significantly reduced drilling force, cutting temperature, and burr height compared to conventional drilling (CD).Sun et al. 28 used three different drill bits to drill CFRP, and the experimental results showed that the dagger drill was the best choice.By applying ultrasonic vibration, its axial force and surface roughness were significantly reduced.Cong et al. 29 studied the suitable range of processing parameters for dry ultrasonic-assisted machining of CFRP and demonstrated the feasibility of cold air cooling.Feng et al. 30 found that ultrasonicassisted vibration drilling can realize the separation of tool and chip, and compared with conventional drilling, the tearing factor at hole exit with ultrasonic-assisted twist drill was lower.Through experiments, Li et al. 31 proved that brittle fracture was the main material removal mechanism in CFRP grinding, and ultrasonicassisted grinding had a number of advantages.Sun et al. 32 found that ultrasonic vibration assisted milling of CFRP has more significant advantages in cutting force and cutting temperature, and surface defects are significantly suppressed.The surface roughness is also reduced, making it an efficient and low-damage machining method/strategy.Liu et al. 33 studied the cutting performance and surface integrity of chromium nickel iron alloy 718 using a ball end milling cutter through rotary ultrasonic elliptical milling.The experimental results showed that the cutting force was reduced by 31.33%under RUEM, and the tool side wear was significantly improved.Gu et al. 34 compiled a large number of articles on the influence of tool motion trajectory on surface quality formation in ultrasonic vibration machining.They classified ultrasonic vibration machining based on the form of tool motion trajectory, explored the influence of different processing parameters on tool motion trajectory, summarized the research on tool motion trajectory in ultrasonic vibration machining, and looked forward to future research.
Although some researchers have conducted studies on drilling CFRP, there are few studies on the ultrasonic-assisted dagger drilling.Therefore, in this study, twist and dagger drills were used to conduct hole-making experiments on CFRP on a computerized numerical control milling machine and an ultrasonic device for ultrasonic vibration-assisted cutting and conventional cutting.The axial force evolutions during conventional and ultrasonic-assisted drilling with these tools were studied.A comparison was made between the axial force and exit morphology during drilling, revealing the mechanism of tool-induced defect suppression under ultrasonic vibration.
Analysis of ultrasonic assisted drilling motion
Both Twist drills and dagger drills belong to drilling tools, which rely on the cross edge to drill into the workpiece, the cutting edge to cut the material, and the spiral groove to discharge the chips.The twist drills are most widely used in hole processing.Here, the twist drill is used to illustrate the motion characteristics of ultrasonic assisted drilling.As shown in Figure 1, the axial motion of the twist drill consists of the feed motion and ultrasonic vibration.u is the twist drill rotation angle; n is the rotational speed of the cutting edge at a certain point c; f z is the feed amount of the twist drill; and V f is the feed speed.f is the ultrasonic frequency and A is amplitude.
For conventional drilling (CD), the axial displacement Zc and rotation angle u at point c of the main cutting edge of a twist drill can be expressed as: Due to the effect of ultrasonic vibration, the twist drill axial displacement superimposed on the ultrasonic vibration in the ultrasonic-assisted drilling (UAD) process can be expressed as: Assuming that the distance from point c on the main cutting edge of the twist drill to the center of the circle is r, the coordinates of the ultrasound-assisted drilling CFRP at a given moment can be expressed as: If A = 0, equation ( 4) is the equation for the coordinates of point c on the main cutting edge for normal drilling of CFRP.
The standard twist drill has two main cutting edges (a-edge and b-edge), the phase difference between two points at the same radius of the two main cutting edges is p, and the axial displacement equations of its two edges can be expressed as: The parameters are set as twist drill radius R is 4 mm, spindle speed n is 2000 r/min, feed fz is 0.02 mm/r, amplitude A is 5 mm, frequency f is 20 kHz, and the cutting trajectory diagram of the cutting edge in conventional drilling and ultrasonic-assisted drilling is plotted using MATLAB software, as shown in Figure 2.
The conventional drilling (dotted line) and ultrasonic-assisted drilling (solid line) for twist drill main cutting edge are shown in Figure 2. The main cutting edge trajectory for conventional drilling is continuous and the trajectory spacing is equal due to the cutting thickness is equal, while the two main cutting edge trajectories for ultrasonic-assisted drilling of between the spacing is not equal, but shows a periodic change.The cutting edge and the workpiece experience a process of contact-impact-separation periodically for ultrasonic assisted drilling so that the continuous cutting movement of conventional drilling turn into intermittent cutting to reduce the cutting time to reduce the wear of the cutting edge.The instantaneous cutting thickness of the main cutting edge of the twist drill changes periodically with the rotation of the tool, reflecting the ultrasonic-assisted drilling with variable cutting thickness characteristics.
In addition, the cutting speed of the main cutting edge at point c in conventional drilling is synthesized by the circumferential speed V r and the axial feed speed V f at that point c.When ultrasonic vibration is applied in the axial direction, there is an ultrasonic vibration speed V u in the axial direction in addition to the feed speed, and the cutting speed of the main cutting edge and the transverse cutting edge are also changed accordingly.Derivation of equation ( 3) yields the axial feed rate for ultrasonic assisted drilling: The circumferential velocity at a point on the main cutting edge of a tool during ultrasound-assisted drilling: The closing speed of the main cutting edge during ultrasonic-assisted drilling at any instant of time: The spindle speed n is set to 2000 r/min, and the feed f z is set to 0.02 mm/r.The ultrasonic frequency f is set to 20 KHz, and the ultrasonic amplitude A is set to 5 mm.MATLAB software is used to draw the cutting speed graphs of the cutting edges in conventional drilling and ultrasonic-assisted drilling, and to compare and analyze the trend of the cutting speeds V of the main cutting edges and the traversing edges of the twist drills.The twist drill is mainly composed of a chisel edge and a main cutting edge.The chisel edge is located in the center of the bit, while the main cutting edge is located on the periphery of the bit, as shown in Figure 3.According to Formula 7, the cutting speed is positively related to the diameter of the bit, so the speed of the chisel edge is lower, which mainly plays the role of guidance and stability, while the speed of the main cutting edge is relatively high, which is responsible for the main cutting.Moreover, the chisel edge increases the axial force by scraping and extruding the materials.Therefore, if the axial force is too large, it may cause exit delamination defects when processing the carbon fiber composite material.
As shown in Figure 4 for the speed change trend of the cutting edge after applying ultrasonic vibration to the twist drill, the cutting speed is also increased with the excess of the cross edge to the main cutting edge (C!D).Compared with the cutting speed of conventional drilling, the cutting speed of ultrasonic-assisted drilling shows a periodic change, which in turn makes the cutting edge and the workpiece contact and separation.6] The periodic change of cutting speed also greatly improves the instantaneous cutting speed of the cutting edge, especially the transverse edge.It changes from pure scraping, extrusion excision of the material into the impact mode of cutting at a certain cutting speed, so that the transverse edge impact CFRP constantly.This mode improve the cutting conditions of the transverse edge and near the cutting edge to improve its sharpness. 28,37he overall reduction of cutting force can effectively reduce the delamination defects.
Test tools and plate material
The test tools used in the experiment were a twist and a dagger drill bits, as shown in Table .1, respectively.Both types of tools were provided by the Shanghai Tool Factory, China.The material of both tools was a hard alloy with a TiAlN coating with a diameter of 8 mm.The twist drill bit had a point angle of 140°and a helix angle of 30°.The structure comparison of dagger drill and Twists drill is shown in Figure 5.The test specimens used in this experiment were 5 mm-thick multi-directional laminated fiber sheets made of T300-12k/AG80, with specific parameters listed in Table 2.
Experimental platform and experimental procedure
The ultrasonic vibration system is mainly composed of two parts: ultrasonic generator and ultrasonic toolholder.The ultrasonic generator provides energy for ultrasonic toolholder and the function of the ultrasonic toolholder is to convert the high-frequency oscillation electrical energy to vibration mechanical energy, so that the tool can do high-frequency and large amplitude vibration for processing.The ultrasonic toolholder is made of the transmitting coil, the receiving coil, the transducer, the horn, and the tool, as shown in Figure 6.The transmitting coil and the receiving coil form a wireless transmission system.The ultrasonic vibration system used in this experiment consisted of an ultrasonic transducer and a longitudinal-torsional amplitude bar, combined with a VMC-850E vertical machining center and a KISTLER 9257B three-component dynamometer with a NI9025 data acquisition card for UAD experiments.The energy transmission of the ultrasonic vibration system was noncontact wireless, and normal drilling was achieved by turning off the ultrasonic power.The overall drilling setup is shown in Figure 7.A KEYENCE LK-G5000 noncontact laser measurement system from Japan was used to measure the amplitudes of the twist and dagger drills.This system mainly consisted of an LK-G5000 series laser controller unit, a laser sensor head, LK-Navigator 2 operation software, and a PC terminal.The measurement method is shown in Figure 8.
This experiment was mainly aimed at comparing the axial drilling forces and exit surface morphology of twist drills and dagger drills during conventional drilling (CD) and ultrasonic-assisted drilling (UAD) of CFRP, as well as their influence on the drilled hole quality.According to the relevant literature, feed rate has a greater impact on hole quality than spindle speed.Four different feed rates were selected as experimental processing parameters to ensure comprehensive testing, as listed in Table 3.
Analysis of axial force variation with time
Analysis of the axial drilling forces of twist drills over time.The changes in the axial drilling forces in the twist and dagger drills under study, at a feed rate of 75 mm/min and a spindle speed of 3000 r/min were analyzed, as shown in Figure 9.The characteristics of the unfiltered axial drilling force signals of twist drills during CD and UAD of CFRP were monitored over time.In the process of drilling CFRP, the cutting forces in the X and Y directions of the vertical axial force were relatively small and could be neglected.
From Figure 9, it can be observed that the waveforms of drilling forces for conventional drilling and UAD were essentially the same.
Segment A-B: The axial drilling force rapidly increased as the helical edge of the twist drill get into contact with the material, reaching its maximum as the entire main cutting edge penetrated.It can be seen from the graph that the slope of the UAD axial force curve was smaller than that of the conventional drilling axial force curve.Segment B-C: In this stage, the main cutting edge was fully immersed in the material, representing the stable stage of the drilling process.It can be observed from the graph that the axial drilling force for UAD was smaller than that of conventional drilling.Due to the upward helix angle of the twist drill, the axial force tended to decrease.The average value of the force during this stage was taken as the axial drilling force.
Segment C-D: As the twist drill exited the material, the axial drilling force rapidly dropped until it reached zero.
Analysis of axial drilling forces of dagger drills over time.The processing method of the axial drilling force for dagger drills was the same as that of twist drills.The trend of the axial drilling force for dagger drills can be divided into five stages over time, as shown in Figure 10.
Segment A-B: When the flutes of the dagger drill first came into contact with the CFRP material and compressed the workpiece, the UAD axial force increased rapidly from 0 to 9 N, while that of conventional drilling (CD) reached 15 N.Then, the first main cutting edge started cutting the material, and the axial force increased rapidly to its maximum, indicating that the first main cutting edge has completely penetrated the material.
Segment B-C: The second main cutting edge started to participate in cutting, entering the hole enlargement stage.The axial force became relatively stable as the rake angle of the second main cutting edge was much smaller than that of the first cutting edge, resulting in a smaller cutting force.At this stage, the axial force of drilling was mainly influenced by the first main cutting edge.The average value of the axial force in the segment B-C was taken as the axial drilling force under this processing parameter for the gun drill.
Segment C-D: When the dagger drill reached the bottom layer of the CFRP material with the flutes, the bottom layer fibers' strength was insufficient to resist the axial force, resulting in an increased deformation.As the first main cutting edge drilled out, the raised area expanded outward, increasing the delamination zone.Point C was the transition point where the first main cutting edge started to drill out of the material, and at this point, the axial force also decreased rapidly.
Segment D-E: At the point D, the first main cutting edge has been completely drilled out, and the second main cutting edge started executing hole enlargement, so the axial force of drilling slightly decreased.But there was an abrupt change in the axial force near point ''E,'' mainly caused by flutter due to uneven transition between the second main cutting edge and the reaming edge during the drilling.
Segment E-F: The axial force of drilling continued to decrease slowly.At this stage, both hole enlargement and reaming occurred simultaneously.As the second main cutting edge drilled out, the axial force dropped to zero.After point F, it entered the stage of complete reaming holes, and the hole wall and exit were further refined until the drilling was completed.
Comparative analysis of axial forces in CD and UAD
Since each cutting tool was subjected to eight repeated experiments at each machining parameter, the average value of the axial force was calculated from the eight experimental results.As shown in Figures 11 and 12, the axial forces of both types of cutting tools increased with feed rate.This was because as the feed rate increased, the volume of material removed by the drill bit increased with each revolution.The resistance that the cutting edge had to overcome to cut the fibers also increased, resulting in a significant increase in axial force.However, the increase in axial force for the twist drill became slower at feed rates V f exceeding 75 mm/ min, mainly because the twist drill had a helical structure, and the upward helical force also increased with the feed rate.
It can be seen from the Figures 11 and 12 that the axial force of ultrasonic-assisted drilling of both tools is smaller than that of conventional drilling, and the axial force reduction of dagger drilling is the most obvious, ranging from 23.3% to 30.7%, and that of twist drilling is from 16.5% to 20.6%.This is due to the ultrasonic vibration changes the mode of action between the drill and the workpiece, from continuous cutting to intermittent cutting, slowing down the wear of the tool.While the carbon fiber belongs to the hard and brittle materials, the vibration of the high frequency makes impact force of the cross cutting edge and the main cutting edge more stronger, improving its cutting ability.Therefore, the axial force of ultrasonic-assisted drilling is lower than that of conventional drilling.
From the fluctuation range of the error bars, it can be observed that for CD and UAD modes with the two types of tools, the size of the error bars increased with the drilling speed.This was mainly due to increased drilling axial force with drilling speed.The reduced rigidity of the tool led to instability during drilling.When comparing the error bars of the two types of tools, it can be found that the fluctuation range of the axial drilling force for the twist drill was much smaller than that of the dagger drill, indicating that the twist drill was more stable and had smaller force fluctuations during drilling.
From the comparison of the reduction rate curves of the axial force for the twist drill and the gun drill in Figures 11 and 12, it can be seen that the reduction rate of axial force for the twist drill showed a slight decrease with feed rate, while that of the dagger drill initially decreased and then slightly increased.At a feed rate V f = 75 mm/min, the latter's reduction rate reached a minimum of 23.3%, but it still exceeded the maximum reduction rate of the twist drill.Therefore, the dagger drill was more suitable for UAD of carbon fiber composites.
Comparison and analysis of the axial force between the twist drill and dagger drill
As shown in Figure 13, the average values of axial force in the stabilization stage of conventional and ultrasonic-assisted drilling were taken respectively.The axial force of drilling of the two types of tools showed an increasing trend with the increase of the feed rate.By comparing the drilling axial force of the two tools, it is found that under four different feed parameters, no matter it is conventional drilling or ultrasonic drilling, the drilling axial force of dagger drill is smaller than that of twist drill.Especially when the feed speed V f = 75 mm/min, the axial force of dagger drill is reduced by 22.4 and 23.4 N under conventional and ultrasonic-assisted drilling, respectively.This is mainly because that the special structure of the double apex angle makes the cutting thickness of the second main cutting edge decrease during the drilling process, and the corresponding axial force decreases.Therefore, the dagger drill is more suitable for drilling carbon fiber composites.12.Comparison of axial force between conventional and ultrasonic drilling of dagger drill.
From Figure 13, it can be observed that the axial force of the twist drill during UAD was not significantly different from that of the dagger drill during conventional drilling.However, when the dagger drill was combined with ultrasonic vibration, its axial force during drilling was much smaller that of the twist drill during conventional drilling.It was the smallest among the four drilling conditions.Since the dagger drill combined drilling, enlarging, and reaming in one tool, the cutting thickness during drilling was small, resulting in relatively small axial forces at each drilling stage.Therefore, the overall axial force during drilling was small.As the axial force was the main cause of exit delamination, the exit delamination damage caused by the dagger drill was lower than that caused by the twist drill.When ultrasonic vibration was applied, the dagger drill's first and second main cutting edges obtained a higher impact velocity, which was beneficial for cutting off the fiber bundles, resulting in a higher-quality exit morphology.
Analysis of exit morphology in CD and UAD with two types of tools
Defects in carbon fiber composite material drilling mainly occur at the exit end.The main reason is that the outermost few layers of fibers at the exit end are subjected to low binding force from the matrix and have low load-bearing capacity.If the axial force during drilling is too large, it can cause cutting stresses to exceed the material's ultimate strength, resulting in exit delamination, splitting, and other defects.When the tool's cutting edge becomes dull, burrs and other defects are easily generated.Among these defects, exit delamination has the greatest impact on the assembly of carbon fiber composite material plates.According to incomplete statistics, more than 60% of the nonconforming rate in aircraft assembly is caused by exit delamination defects during CFRP drilling.Therefore, exit delamination defects are the main focus of analysis. 38,39urrently, the evaluation of exit delamination defects is usually expressed using the area ratio method in Figure 14.A max represents the maximum area of exit delamination damage (expressed as the area of a circle with a diameter of D max ), and A nom represents the nominal area of the hole (expressed as the area of a circle with a diameter of D nom ).The delamination factor is calculated as the ratio of A max to A nom .
As shown in Figure 15, under the same machining parameters, the exit delamination area of CFRP caused by helical drilling with ultrasonic assistance is smaller than that of conventional drilling.This is because the axial force is the main factor affecting delamination.The helical drill's transverse cutting edge performs negative rake cutting during the machining process, which relies on extrusion and twisting friction to achieve fiber fracture, resulting in a large axial force.If the main cutting edge cannot cut the fiber bundle in time, it will cause delamination and burrs along the direction of weak interlayer bonding strength.However, when ultrasonic vibration is applied, the transverse and main cutting edge obtain a large impact velocity.Therefore, the transverse cutting edge no longer solely relies on extrusion and twisting friction to cut the fibers but systematically cleaves the carbon fiber bundles, which is more conducive to fiber bundle cutting.Dagger drilling technique is a combination of drilling, expanding, reaming.As shown in Figure 16, when the rotational speed is 3000 mm/min and the feed speed is 75 mm/min, the exit delamination area is much smaller than that of conventional drilling, which is similar to twist drilling with the law of making holes.
The change relationship between exit delamination factor and feed speed for two tools in two machining modes is shown in Figure 17.It can be seen that the exit delamination factor with the increase of feed speed also shows an increasing trend on the whole.Under ultrasonic assisted drilling, the maximum reduction of delamination factor at the exit of twists drill hole is 21.8%, and that of dagger drill hole is 13.6%.In addition, the size of the ultrasonic-assisted drilling delamination factor of the twist drill and the size of the exit delamination factor of the dagger drill ordinary drilling are not much different.It can also be seen from it that the exit delamination factor of the twist drill is the largest in conventional drilling, while the dagger drill is the smallest in ultrasonic-assisted drilling, which is mainly due to the drilling-expanding-reaming composite drilling structure (the first top angle of 102°, the second top angle 20°) makes the axial force of drilling smaller than that of twist drilling.It can be seen from Figures 15 to 17 that the exit burrs of ultrasonic-assisted drilling hole are less than that of conventional drilling and the delamination area is the smallest.This is mainly because that the ultrasonic-assisted drilling changes the cutting angle of the tool, which is more conducive to cutting off the fibers, reducing the generation of defects in the hole making.
Conclusions
The experimental results of ultrasonic-assisted and conventional drilling of carbon fiber-reinforced polymer (CFRP) composites using twist (helical) and dagger drills were analyzed regarding the evolution of axial drilling forces.The following conclusions were drawn: (1) Under the action of ultrasonic vibration, the cutting edge trajectory of the tool changes, achieving periodic intermittent cutting, reducing the contact time between the drill and workpiece, and reducing tool wear.In addition, the ultrasonic vibration improves the cutting condition of the chisel edge, that makes it change to play an important role in drilling and impacting with a certain cutting speed from extruding and scraping cutting.At the same time, it also increases the instantaneous cutting speed of the main cutting edge, which is more conducive to cutting off carbon fibers and reducing axial force.(2) The time-varying curve of axial force of twist drill and dagger drill is analyzed.In the stable stage of drilling, the axial force of ultrasonic assisted drilling is significantly lower than that of ordinary drilling.Due to the special geometric structure of dagger drill, the change of its drilling force is more complicated.twists drill is between 16.5% and 20.6%, and the dagger drill showed the most significant reduction ranging from 23.3% to 30.7%.The fluctuation range of axial force error bar of twist drill is much smaller than that of dagger drill, so twist drill is more stable in the drilling process.( 4) With increased feed rate, the axial forces in both types of tools gradually increased, and the exit delamination factor also shows an overall increasing trend.Compared with conventional drilling, the maximum reduction of delamination factor at the exit of twists drill hole is 21.8%, and that of dagger drill hole is 13.6% under ultrasonic assisted drilling.Among them, the delamination factor of ultrasonic assisted drilling in dagger drilling is the smallest.(5) The exit morphology quality of UAD for CFRP was superior to that of conventional drilling for both types of tools.The dagger drill benefited from its composite drilling-countersinking-reaming structure, effectively avoiding exit delamination and resulting in neater cutting edges.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest respect to research, authorship, and/or publication of this article.
Figure 2 .
Figure 2. Trajectory diagram of the cutting edge.
Figure 3 .
Figure 3. Speed of the cutting edge in conventional drilling.
Figure 4 .
Figure 4. Speed of the cutting edge in ultrasonic-assisted drilling.
Figure 5 .
Figure 5. Structure comparison diagram of dagger drill and Twists drill.
Figure 8 .
Figure 8. Vibration of drill bits measurement equipment.
Figure 9 .
Figure 9. Axial force evolution of the twist drill.Figure10.Curve of the axial force of the dagger drill with time.
Figure 10 .
Figure 9. Axial force evolution of the twist drill.Figure10.Curve of the axial force of the dagger drill with time.
Figure 11 .
Figure 11.Comparison of axial force between conventional and ultrasonic drilling of twist drill.12.Comparison of axial force between conventional and ultrasonic drilling of dagger drill.
Figure 13 .
Figure 13.Comparison of axial force between conventional and ultrasonic drilling of two tools.
( 3 )Figure 16 .
Figure 16.Comparison of the outlet morphology of the dagger drill bit.
Figure 17 .
Figure 17. of exit factors at different feed speeds.
Table 1 .
Specific parameters of the drill bits.
Table 2 .
Performance parameters of the fiberboard. | 6,966.4 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Performance Evaluation of Group Sparse Reconstruction and Total Variation Minimization for Target Imaging in Stratified Subsurface Media
: Sparse reconstruction methods have been successfully applied for e ffi cient radar imaging of targets embedded in stratified dielectric subsurface media. Recently, a total variation minimization (TVM) based approach was shown to provide superior image reconstruction performance over standard L1-norm minimization-based method, especially in case of non-point-like targets. Alternatively, group sparse reconstruction (GSR) schemes can also be employed to account for embedded target extent. In this paper, we provide qualitative and quantitative performance evaluations of TVM and GSR schemes for e ffi cient and reliable target imaging in stratified subsurface media. Using numerical electromagnetic data of targets buried in the ground, we demonstrate that GSR and TVM provide comparable reconstruction performance qualitatively, with GSR exhibiting a slight superiority over TVM quantitatively, albeit at the expense of less flexibility in regularization parameters.
Recently, a generalized sparse image reconstruction approach with total variation minimization (TVM) was proposed for efficient and reliable radar imaging through multilayered background media [4]. More specifically, the multilayered subsurface Green's function was incorporated in the imaging algorithm to model the wave propagation effects in the multilayered environment and was efficiently evaluated using the saddle point method. As compared to standard l 1 -norm minimization-based techniques [20,22], which are based on point target model, the TVM-based approach minimizes the gradient of the image, thereby leading to better edge preservation and, in turn, reconstruction of non-point-like and extended targets [4,21].
An alternative to TVM based approach is group sparse reconstruction (GSR), which can also account for the target extent [21,23]. In high-resolution images, each extended or non-point-like target generally
Sparsity-Based Image Formation through Stratified Subsurface Media
In this section, we first describe the signal model for imaging through stratified subsurface media with a co-located MIMO radar. Then, we briefly review the TVM and GSR techniques for image reconstruction.
Signal Model in Matrix Form
We consider an N-element transmit array and an M-element receive array, with r tn = (x tn , z tn ) and r rm = (x rm , z rm ) denoting the respective position vectors of the nth transmitter and mth receiver. A stepped-frequency signal, with P frequencies uniformly covering the frequency band [ f min , f max ], is used for imaging. The transmitters are assumed to be activated sequentially, while simultaneous reception at all receivers is assumed. We focus on the four-layered background media, shown in Figure 1, where the first layer is air and the remaining three are subsurface layers. The dielectric constant and conductivity of the subsurface layers are assumed to be (ε r2 , σ 2 ), (ε r3 , σ 3 ), and (ε r4 , σ 4 ), while the second and third layers have a thickness of d 2 and d 3 , respectively. Although the presented formulation considers only three subsurface layers, it can be readily generalized to an arbitrary number of subsurface layers. generally occupies a contiguous group of pixels rather than a single pixel. As such, the point-targetbased sparse signal model can be refined to exclude from the solution space any image whose support contains isolated pixel indices [22]. The group sparsity approach incorporates the clustering of the non-zero image pixels into a small number of contiguous groups as a constraint in the sparse reconstruction problem. The specific clustering pattern can be structured to match the desired target shape.
In this paper, we provide a performance evaluation of TVM and GSR schemes for non-point-like target imaging in stratified subsurface media. To this end, we consider a multiple-input multiple output (MIMO) radar system and use numerical electromagnetic data of targets buried in a fourlayered background environment. We show that the two methods provide comparable performance qualitatively under noisy measurements. Quantitatively, GSR exhibits a slight superiority in terms of image reconstruction, especially at low signal-to-noise ratio (SNR) values. However, this GSR quantitative performance is achieved at the expense of less flexibility in setting of regularization parameters.
The remainder of the paper is organized as follows. Section 2 provides a review of the Green's function formulation for modeling the wave propagation effects in the multilayered background media and details the TVM and GSR based imaging algorithms. Section 3 describes the considered metrics for quantitative assessment and provides performance comparison using two sets of image reconstruction results of targets embedded in stratified subsurface media. Concluding remarks are provided in Section 4.
Sparsity-Based Image Formation through Stratified Subsurface Media
In this section, we first describe the signal model for imaging through stratified subsurface media with a co-located MIMO radar. Then, we briefly review the TVM and GSR techniques for image reconstruction.
Signal Model in Matrix Form
We consider an N-element transmit array and an M-element receive array, with , and , denoting the respective position vectors of the th transmitter and th receiver. A stepped-frequency signal, with frequencies uniformly covering the frequency band , , is used for imaging. The transmitters are assumed to be activated sequentially, while simultaneous reception at all receivers is assumed. We focus on the four-layered background media, shown in Figure 1, where the first layer is air and the remaining three are subsurface layers. The dielectric constant and conductivity of the subsurface layers are assumed to be , , , , and , , while the second and third layers have a thickness of and , respectively. Although the presented formulation considers only three subsurface layers, it can be readily generalized to an arbitrary number of subsurface layers. Assuming a point target embedded in the fourth layer, the received scattered field, , , , at the mth receiver with the nth transmitter active can be expressed as Figure 1. Radar imaging through stratified subsurface media [21]. Assuming a point target embedded in the fourth layer, the received scattered field, E s (r rm , r tn , k p ), at the mth receiver with the nth transmitter active can be expressed as E s (r rm , r tn , k p ) = G(r rm , r, k p )G(r, r tn , k p )σ(r)dr, (1) where σ(r) is the scene reflectivity at position r = (x, z), k p is the free-space wave number of the pth frequency, and G(r rm , r, k p ) and G(r, r tn , k p ) are the layered media Green's functions characterizing wave propagation from the transmitter to the target and from the target to the receiver, respectively. We note that Equation (1) essentially uses the first-order Born approximation, which ignores the multiple scattering effects. We discretize the region being imaged in the xz-plane into K × L pixels, and represent the corresponding scene reflectivity by the K × L matrix s. Then, the mth received signal in Equation (1) can be expressed in matrix form as where vec(·) returns the column-wise vectorization of its matrix argument, the pth element of y m,n is [ y m,n ] p = E s (r rm , r tn , k p ), and Ψ m,n is a P × KL dictionary matrix encompassing the stratified media effects, with its (p, q)th element given by The layered media Green's function for the GPR imaging configuration in Figure 1 can be expressed in closed-form using the Saddle Point Method (SPM) as [4] G(r R , r, k p ) ≈ jk 2 where r R = (x R , z R ), k 1 is the wavenumber of the air layer, α is a real-valued scaling variable, and with where y = [ y T 1,1 , y T 1,2 , . . . The signal model in Equation (8) corresponds to the full data measurements, comprising all P frequencies from all N transmitters and M receivers. In many practical operational scenarios, there are often cost constraints, which may limit the number of transmitters and receivers available for deployment. Note that reducing the number of frequencies at which measurements are made over the desired bandwidth may not translate into cost reduction. This is because the antennas and radio frequency (RF) front end would still be required to operate over the entire frequency band. As such, we retain the use of all P frequencies and assume that N t < N transmitters and M r < M receivers are available for data collection. Under these constraints, the model in Equation (8) takes the form y = ΛΨs = Θs. (9) Here, Λ is an M r N t P × NMP measurement matrix given by [24] where ' ' denotes the Kronecker product, I (·) is an identity matrix with the subscript indicating its dimensions, Φ is an M r × M matrix constructed by randomly selecting M r rows of I N t P , and ϑ is an N t × N matrix consisting of N t randomly selected rows of I MP .
Total Variation Minimization
Using the reduced measurements in Equation (9), the unknown scene reflectivity vector s can be recovered by solving the TVM problem [4], where δ represents a small tolerance error, In this work, we use the Nesterov algorithm in the NESTA package to solve the TVM problem in Equation (11) [25]. This algorithm utilizes a regularization scheme together with a smoothed version of l 1 -norm to achieve the solution of the underlying convex optimization problem.
Group Sparse Reconstruction
In high-resolution imaging, targets generally occupy a group of neighboring pixels whose extent depends not only on the target dimension, but also on the system resolution. This prior pixel neighborhood information can be incorporated in the image reconstruction problem using group sparsity constraints. More specifically, the scene reflectivity vector s can be obtained by solving the convex optimization problem [23,26,27] where g q ⊆ {0, 1, . . . , KL − 1} is an index set corresponding to the group of pixels forming a neighborhood around the qth pixel and the diagonal weighting matrix W (q) ensures that the weighting within a group is according to the desired pixel neighborhood relation. Figure 2 provides an example of how the grouping of the image pixels works for a 10 × 10 image [23]. The number in the top left corner of each square indicates the pixel index, whereas the pixel weight of the depicted group is represented by the number in the center of each pixel. The weights are chosen such that their sum equals unity to avoid unintentional scaling of the reconstruction result. As shown in Figure 2, the index set for the group corresponding to the 12th pixel is g 12 = {2, 11, 12, 13, 22}, with the corresponding weighting matrix W (12) = diag 1 8 , 1 8 , 1 2 , 1 8 , 1 8 . In this paper, we solve the reconstruction problem in Equation (13) using Primal YALL1 group [27] and utilize overlapping groups. neighborhood around the th pixel and the diagonal weighting matrix ensures that the weighting within a group is according to the desired pixel neighborhood relation. Figure 2 provides an example of how the grouping of the image pixels works for a 10 10 image [23]. The number in the top left corner of each square indicates the pixel index, whereas the pixel weight of the depicted group is represented by the number in the center of each pixel. The weights are chosen such that their sum equals unity to avoid unintentional scaling of the reconstruction result. As shown in Figure 2, the index set for the group corresponding to the 12th pixel is 2, 11, 12, 13, 22 , with the corresponding weighting matrix diag , , , , . In this paper, we solve the reconstruction problem in Equation (13) using Primal YALL1 group [27] and utilize overlapping groups.
Performance Evaluation Results and Discussion
In this section, we first describe the metrics considered for quantitative performance evaluation and then present the image reconstruction assessment results of targets embedded in a stratified subsurface.
Performance Evaluation Results and Discussion
In this section, we first describe the metrics considered for quantitative performance evaluation and then present the image reconstruction assessment results of targets embedded in a stratified subsurface.
Quantitative Metrics
We consider two different metrics, namely, relative clutter power (RCP) and Earth mover's distance (EMD), for quantitative performance evaluation of the TVM and GSR schemes.
Relative Clutter Peak
We define the target region, R t , as the union of rectangular or circular regions at known target positions, whereas the remainder of the image comprises the clutter region, R c ,. The size of each individual target region is determined based on the ground truth and the system resolution. With A R t ,ŝ and A R c ,ŝ denoting the respective maximum amplitude of target and clutter regions in the reconstructed imageŝ, the RCP is defined as [23] RCP in dB = 20 log 10 whereŝ q is the qth element ofŝ. The RCP metric penalizes strong clutter and favors clean images with low noise and clutter power and high target amplitudes.
Earth Mover's Distance
Earth mover's distance is defined as the minimal amount of image intensity that has to be moved to transform one image into another [28,29]. For the underlying application, we measure the EMD between the reconstructed image and the ground truth image. This metric incorporates perceptual differences between the reconstructed and ground truth images, as it measures error in terms of not only the differences in pixel values, but also physical distance away from the actual target locations. It, therefore, is a preferred metric over mean-squared error in sparse reconstruction literature [30]. In this work, we use a fast implementation of EMD [31].
Performance Comparison
We consider a MIMO radar system with 17 uniformly spaced transmitters from −0.96 to 0.96 m and 16 receivers equally spaced from −0.9 to 0.9 m. Both arrays are at a height of 0.2 m above the ground. The stepped-frequency signal covers the 0.8 to 2 GHz bandwidth with P = 49 frequency steps. A time-domain full wave electromagnetic solver based on Finite-Difference Time-Domain (FDTD) method is used for generating the received signals from two different scenes. Fast Fourier Transform (FFT) is applied to transform the time-domain received signals to frequency domain. The radar measurement configuration and signal parameters are the same in both scenarios. White Gaussian noise is added to the frequency-domain data. For image reconstruction, the thickness and complex permittivity of each layer of the stratified subsurface media are assumed to be known a priori. In practice, however, these parameters can be estimated using an inversion scheme. The inversion of multilayered medium parameters has been well developed within the framework of one-dimensional inverse scattering in the past two decades [32][33][34][35]. For the conventional single layer subsurface, analytical methods for estimation of the dielectric slab parameters have been provided in [32,35]. For multilayered subsurface media, the number and parameters of each layer can be efficiently retrieved using a layer-stripping algorithm or global optimization inverse scattering techniques [33,34].
Example 1
In this example, we consider three metallic targets (two rectangular and one cylindrical) embedded in a three-layered background environment, as shown in Figure 3. The dielectric constant, conductivity, and thickness of the second layer are ε r2 = 6, σ 2 = 0.01 S/m, and d 2 = 0.2 m, respectively. The third layer with a dielectric constant ε r3 = 3 and conductivity σ 3 = 0.005 S/m contains the three targets. The target dimensions are specified in Table 1. We randomly select two transmitters (12% of the available quantity) and six receivers (38% of the available number). For each chosen transmitter-receiver pair, we utilize all 49 frequency measurements to reconstruct the image. Figures 4 and 5 depict the images obtained using TVM, GSR, and standard l 1 -norm sparse reconstruction [20] for −10 and −5 dB SNR values, respectively. We observe from Figures 4 and 5 that although there are a few false reconstructions in both TVM and GSR results, the two approaches provide cleaner images of the targets as compared to the standard l 1 -norm sparse reconstruction result. Next, we quantitatively evaluate the performance of the TVR, GSR, and standard -norm reconstruction schemes. We consider SNR values in the [−10 10] dB range with 5 dB increments. We perform a total of 100 Monte Carlo trials for each SNR value with different realization of noise and different randomly chosen sets of two transmitters and six receivers each time. For every trial, we reconstruct the image using TVM, GSR, and standard -norm schemes and compute the corresponding values of the metrics. Figure 6 plots the RCP and the EMD, each averaged over 100 trials, versus SNR. The variance of the EMD for each SNR is also indicated in Figure 5. We observe that GSR and TVM provide almost identical RCP performance, while significantly outperforming standard -norm reconstruction for all SNR values. In terms of EMD at low SNR, GSR provides the best performance as manifested by the lowest EMD average and variance values, while standardnorm reconstruction has the worst performance with TVM in the middle. At higher SNR values, the average EMD values for all three reconstruction methods are comparable. However, both GSR and TVM yield a smaller variance for the EMD as compared to the standard -norm reconstruction. Similar trends were observed for reconstruction performance with a random selection of four transmitters and four receivers and a random selection of four transmitters and eight receivers.
Example 2
In this example, we consider a large composite metallic target consisting of two semi-cylinders on top of a rectangle cylinder, which is embedded in the third layer of a three-layer background as shown in Figure 7. The target dimensions are also specified in Figure 7. The physical and electrical properties of the three background layers are the same as in Example 1. Figures 8 and 9 depict the reconstruction results obtained with two randomly chosen transmitters and six randomly chosen receivers using TVM, GSR, and standard -norm sparse reconstruction for SNR values of −5 and 0 dB, respectively. Similar to Example 1, we observe that both TVM and GSR approaches yield superior quality images as compared to the standard -norm sparse reconstruction. Next, we quantitatively evaluate the performance of the TVR, GSR, and standard l 1 -norm reconstruction schemes. We consider SNR values in the [−10 10] dB range with 5 dB increments. We perform a total of 100 Monte Carlo trials for each SNR value with different realization of noise and different randomly chosen sets of two transmitters and six receivers each time. For every trial, we reconstruct the image using TVM, GSR, and standard l 1 -norm schemes and compute the corresponding values of the metrics. Figure 6 plots the RCP and the EMD, each averaged over 100 trials, versus SNR. The variance of the EMD for each SNR is also indicated in Figure 5. We observe that GSR and TVM provide almost identical RCP performance, while significantly outperforming standard l 1 -norm reconstruction for all SNR values. In terms of EMD at low SNR, GSR provides the best performance as manifested by the lowest EMD average and variance values, while standard l 1 -norm reconstruction has the worst performance with TVM in the middle. At higher SNR values, the average EMD values for all three reconstruction methods are comparable. However, both GSR and TVM yield a smaller variance for the EMD as compared to the standard l 1 -norm reconstruction. Similar trends were observed for reconstruction performance with a random selection of four transmitters and four receivers and a random selection of four transmitters and eight receivers. Next, we quantitatively evaluate the performance of the TVR, GSR, and standard -norm reconstruction schemes. We consider SNR values in the [−10 10] dB range with 5 dB increments. We perform a total of 100 Monte Carlo trials for each SNR value with different realization of noise and different randomly chosen sets of two transmitters and six receivers each time. For every trial, we reconstruct the image using TVM, GSR, and standard -norm schemes and compute the corresponding values of the metrics. Figure 6 plots the RCP and the EMD, each averaged over 100 trials, versus SNR. The variance of the EMD for each SNR is also indicated in Figure 5. We observe that GSR and TVM provide almost identical RCP performance, while significantly outperforming standard -norm reconstruction for all SNR values. In terms of EMD at low SNR, GSR provides the best performance as manifested by the lowest EMD average and variance values, while standardnorm reconstruction has the worst performance with TVM in the middle. At higher SNR values, the average EMD values for all three reconstruction methods are comparable. However, both GSR and TVM yield a smaller variance for the EMD as compared to the standard -norm reconstruction. Similar trends were observed for reconstruction performance with a random selection of four transmitters and four receivers and a random selection of four transmitters and eight receivers.
Example 2
In this example, we consider a large composite metallic target consisting of two semi-cylinders on top of a rectangle cylinder, which is embedded in the third layer of a three-layer background as shown in Figure 7. The target dimensions are also specified in Figure 7. The physical and electrical properties of the three background layers are the same as in Example 1. Figures 8 and 9 depict the reconstruction results obtained with two randomly chosen transmitters and six randomly chosen receivers using TVM, GSR, and standard -norm sparse reconstruction for SNR values of −5 and 0
Example 2
In this example, we consider a large composite metallic target consisting of two semi-cylinders on top of a rectangle cylinder, which is embedded in the third layer of a three-layer background as shown in Figure 7. The target dimensions are also specified in Figure 7. The physical and electrical properties of the three background layers are the same as in Example 1. Figures 8 and 9 depict the reconstruction results obtained with two randomly chosen transmitters and six randomly chosen receivers using TVM, GSR, and standard l 1 -norm sparse reconstruction for SNR values of −5 and 0 dB, respectively. Similar to Example 1, we observe that both TVM and GSR approaches yield superior quality images as compared to the standard l 1 -norm sparse reconstruction. The two quantitative performance metrics, averaged over 100 Monte Carlo trials, are plotted vs. SNR in Figure 10 for TVR, GSR, and standard l 1 -norm reconstruction schemes. For EMD, the variance is also indicated in Figure 10. Again, we observe that the performance of GSR and TVM is comparable in terms of RCP, while that of standard l 1 -norm reconstruction is significantly lower. Unlike the three-target scene in Example 1 wherein TVM performance in terms of average EMD was approximately half-way between that of GSR and standard l 1 -norm reconstruction, the EMD curve for TVM in the larger composite target case closely follows the corresponding curve for GSR. TVM's capability of edge preservation better manifests itself in case of the composite target, leading to smaller performance difference with GSR as compared to the three-targets scene in Example 1.
Reconstructions with a random selection of four transmitters and four receivers, and a set of four transmitters and eight receivers, both sets chosen at random, yielded similar quantitative performance trends. The two quantitative performance metrics, averaged over 100 Monte Carlo trials, are plotted vs. SNR in Figure 10 for TVR, GSR, and standard -norm reconstruction schemes. For EMD, the variance is also indicated in Figure 10. Again, we observe that the performance of GSR and TVM is comparable in terms of RCP, while that of standard -norm reconstruction is significantly lower. Unlike the three-target scene in Example 1 wherein TVM performance in terms of average EMD was approximately half-way between that of GSR and standard -norm reconstruction, the EMD curve for TVM in the larger composite target case closely follows the corresponding curve for GSR. TVM's capability of edge preservation better manifests itself in case of the composite target, leading to smaller performance difference with GSR as compared to the three-targets scene in Example 1.
Reconstructions with a random selection of four transmitters and four receivers, and a set of four transmitters and eight receivers, both sets chosen at random, yielded similar quantitative performance trends.
Discussion
The results provided in Section 3.2 quantify and validate the superior performance of the GSR and TVM approaches over the standard -norm reconstruction for non-point-like targets, especially at low SNR values. A comment is in order on the choice of regularization/penalty parameters for the employed sparse reconstruction methods. The Primal YALL1 group solver requires setting of a length-2 penalty parameter vector [27], whereas NESTA, employed for TVM and standard -norm reconstructions, requires the specification of smoothing and stopping parameters. Both the smoothing and stopping parameters in NESTA should be set to small values for higher accuracy or large values for faster convergence [36]. In general, choosing a small value for the smoothing parameter warrants a small value of the stopping parameter. For large smoothing parameter value, the stopping parameter can also be larger. Note that setting the smoothing parameter equal to zero results in use of the standard "non-smoothed" version of the -norm in the optimization problem. For the Primal YALL1 group solver, the elements of the length-2 penalty vector are set as inversely proportional to ‖ ‖ , where ‖•‖ denotes the infinity norm of the argument [23].
For the considered numerical experiments, we set these parameters for NESTA and YALL1 empirically for a nominal number of transmitters and receivers under noise-free conditions following the aforementioned guidelines, opting for higher accuracy over faster convergence for NESTA. For NESTA, no adjustments were made when the total number of transmitters and receivers were increased or decreased compared to the nominal case. However, the proportionality constants for the penalty parameters in YALL1 group had to be adjusted to account for the change in ‖ ‖ with
Discussion
The results provided in Section 3.2 quantify and validate the superior performance of the GSR and TVM approaches over the standard l 1 -norm reconstruction for non-point-like targets, especially at low SNR values. A comment is in order on the choice of regularization/penalty parameters for the employed sparse reconstruction methods. The Primal YALL1 group solver requires setting of a length-2 penalty parameter vector [27], whereas NESTA, employed for TVM and standard l 1 -norm reconstructions, requires the specification of smoothing and stopping parameters. Both the smoothing and stopping parameters in NESTA should be set to small values for higher accuracy or large values for faster convergence [36]. In general, choosing a small value for the smoothing parameter warrants a small value of the stopping parameter. For large smoothing parameter value, the stopping parameter can also be larger. Note that setting the smoothing parameter equal to zero results in use of the standard "non-smoothed" version of the l 1 -norm in the optimization problem. For the Primal YALL1 group solver, the elements of the length-2 penalty vector are set as inversely proportional to Θ H y ∞ , where · ∞ denotes the infinity norm of the argument [23]. For the considered numerical experiments, we set these parameters for NESTA and YALL1 empirically for a nominal number of transmitters and receivers under noise-free conditions following the aforementioned guidelines, opting for higher accuracy over faster convergence for NESTA. For NESTA, no adjustments were made when the total number of transmitters and receivers were increased or decreased compared to the nominal case. However, the proportionality constants for the penalty parameters in YALL1 group had to be adjusted to account for the change in Θ H y ∞ with any increase or decrease in the number of transmitters and receivers employed. Thus, compared to YALL1 group, NESTA was found to be more robust to changes in the amount of data employed for the sparse reconstructions.
Conclusions
In this paper, we conducted qualitative and quantitative performance evaluations of group sparse reconstruction and total variation minimization approaches for radar imaging through stratified subsurface media. The TVM approach minimizes the gradient of the image resulting in good edge preservation, while group sparse approach exploits prior pixel neighborhood information about extended targets for reliable imaging. Under reduced number of transmitters and receivers, numerical EM measurements with varying SNR levels were considered. Both TVM and GSR approaches demonstrated comparable qualitative performance for different subsurface scenarios. The quantitative evaluation revealed a slight performance advantage of GSR over TVM. However, this advantage came at the cost of reduced flexibility in setting regularization/penalty parameters for GSR vs. TVM. | 6,696.2 | 2019-10-30T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Changes in MRI Workflow of Multiple Sclerosis after Introduction of an AI-Software: A Qualitative Study
The purpose of this study was to explore the effects of the integration of machine learning into daily radiological diagnostics, using the example of the machine learning software mdbrain® (Mediaire GmbH, Germany) in the diagnostic MRI workflow of patients with multiple sclerosis at the University Medicine Greifswald. The data were assessed through expert interviews, a comparison of analysis times with and without the machine learning software, as well as a process analysis of MRI workflows. Our results indicate a reduction in the screen-reading workload, improved decision-making regarding contrast administration, an optimized workflow, reduced examination times, and facilitated report communication with colleagues and patients. Our results call for a broader and quantitative analysis.
Introduction
Multiple sclerosis (MS) is an autoimmune chronic demyelinating disorder affecting the central nervous system.The primary features of this potentially disabling disease include inflammation and neurodegeneration.MS manifests as lesions, caused by axonal or neuronal loss, demyelination, and astrocytic gliosis [1].There are four major MS types as initially defined by the International Advisory Committee on Clinical Trials in Multiple Sclerosis in 1996 [2]: relapsing-remitting MS (RRMS), primary-progressive MS (PPMS), secondary-progressive MS (SPMS), and progressive-relapsing MS (PRMS).Patients with MS are typically diagnosed in young adulthood with a mean age of first diagnosis of 32 years [3,4].Regular MRI measurements are of outstanding importance in the monitoring of MS patients to scan for signs of failure of the current medical regime [5], which could lead to irreversible neurologic deterioration.Despite advancements in understanding MS' pathogenesis, there is still a lack of specific biomarkers, necessitating reliance on clinical diagnosis and imaging for patient management [6].There are revised guidelines by three international expert groups of neurologists and radiologists to provide standardized protocols for MRI in MS diagnosis and follow-up [7].
MS affects about 250,000 patients in Germany [8] and 2.8 million patients worldwide (35.9 per 100,000).The pooled incidence rate of 75 reporting countries is 2.1 per 100,000 persons/year [4].The Department of Neurology at the University Medicine Greifswald has a crucial role in the care of MS patients in North-Eastern Germany.The MS outpatient clinic cares for about 750 MS patients per year, and the neuroradiologic department carries out a large portion of the outpatients' MR imaging.Initial and follow-up MRIs lead to a considerable amount of work hours for radiologists to guarantee adequate image acquisition, evaluation, and reporting.The augmented workload requires a closer examination of management and optimization strategies, involving working routines [9].The rising demand and limited capacities for MR imaging require an effective workflow, which can be especially challenging in an academic environment [10].Machine learning (ML) as a subfield of artificial intelligence (AI) is a promising promoter of simplifying working routines, especially when based on homogeneous source data [11], i.e., high-quality MRI sequences of MS patients.By implementing ML software (Version 4) into daily working routines, radiologists may be able to improve reporting accuracy, workflow efficiency, and interdisciplinary communication [12][13][14].There are several quantitative volumetric reporting tools, which are commercially available and aim to improve radiologists' accuracy in the interpretation of the MRI examinations of patients with multiple sclerosis [15].Mdbrain ® [16][17][18][19][20][21] is a machine learning-based software which uses commonly standard MRI sequences to perform brain volume measurements, analyses of gray and white matter volumes, as well as white matter lesions.Mdbrain ® provides the radiologist with an automatic quantification of the lesion load of an MS patient, including a comparison with previous MRI examinations, if available.ML software support and the associated acceleration of the diagnostic process should give the radiologist more time for other tasks at hand, i.e., communication of findings, managing examination logistics, or academic/scientific challenges [22].
Additionally, the repeated administration of Gadolinium-containing contrast media has been generally debated [23,24].While the use of contrast agents is essential in the initial MRI diagnosis of multiple sclerosis, their additional value in follow-up examinations is in doubt [25,26], and is considered optional, according to the current clinical guidelines in Germany.A potential benefit of administering contrast agents in follow-up MRI assessments is the identification of patients with active inflammation, as these individuals may potentially benefit from a pharmacological intervention with corticosteroids [27].
The acceptance of a new technology in a daily work routine does not only depend on medical or logistic benefits, but also on user acceptance and the transferability of new processes into existing structures [28].To this day, the effects of the implementation of ML software on the working routines of radiologists, radiologic technicians (MR technicians), and clinicians, as well as the satisfaction of users and patients, are not fully examined [15].This qualitative study [29] offers insights into the implementation process of the ML software mdbrain ® in the clinical MRI examination routine of patients with multiple sclerosis.
Materials and Methods
A qualitative investigation was conducted between November 2022 and March 2023 by the Department of Business Administration and Healthcare Management in collaboration with the Institute for Radiology and Neuroradiology at the University of Greifswald.Data collection was based on problem-centered expert interviews involving four experienced radiological residents, one neuroradiologist, one technical radiology assistant, and one neurologist, who specialized in MS treatment.In addition to questions about the process flow in routine care, the interview guideline covered aspects of professional roles, responsibilities, external factors influencing the workflow, and opinions on the use of ML software.The selection of the interview partners was based on the preliminary process analysis.The guideline for the expert interviews was divided into different sections.The first section contained questions on the interviewees' professional group, task, and involvement in the process.In addition, the process-influencing areas of time, workload, quality, resource utilization, information, workflow, and communication were addressed.Depending on the professional group surveyed, the key questions were adapted.The interviews were documented through audio recordings and then transcribed.Through these interviews, the process pathways of MRI examinations with and without the ML software mdbrain ® were elucidated.Interview information was used for highlighting which parts of the daily professional routines of involved personnel would be subject to adaptation if the ML software was implemented.Key interviewee statements were deduced, summarized, and supported with quotation examples.
After the preliminary analysis, an in-depth exploration of the MRI examination routine was undertaken through process identification and analysis.Process identification aimed to comprehensively delineate all relevant (sub-)processes associated with MRI examinations and their subsequent interpretation.The ensuing process analysis, built upon the data gleaned from process identification, sought to assess the impact of the machine learning software (mdbrain ® ) on the workflow of MRI examination and interpretation.
Mdbrain ® is a licensed MRI post-processing software provided by Mediaire GmbH, Berlin, Germany (https://mediaire.ai/), and certified [30,31].Mdbrain ® operates as a PACS (Picture Archiving and Communication System)-integrated module.As a supporting tool for MS diagnostics, mdbrain ® creates two quantitative reports: a volumetry report and a lesion report.The volumetry report provides a quantitative assessment of the brain volume in comparison with a reference collective.To generate this report, mdbrain ® uses a three-dimensional T1-weighted gradient echo MR to segment 3 tissue classes (white matter, grey matter, cerebrospinal fluid) and 21 brain regions.Mdbrain ® uses a deep convolutional neural network (U-Net) trained with annotated ground truth data of more than 1000 heterogenous patient data sets from different scanner types and sequences.The volumes of these regions are then quantified, and the corresponding percentiles are calculated by comparing the measured patient's volumes to the volumes of a healthy population (8500 healthy people) with the same covariates (age, sex, and total intracranial volume).When multiple scans of the same patient are available, a longitudinal analysis is included in the report, demonstrating possible brain atrophy dynamics over time.Volumes and percentiles are displayed in tabular format, along with clinically relevant MR slices and segmentation masks (Appendix A as example report).Additionally, a lesion characterization reporting gives a comprehensive quantitative assessment of the brain lesion load to facilitate the accurate diagnosis and subsequent monitoring of MS progression.Lesions (white matter hyperintensities) are automatically segmented from a fluid-attenuated inversion recovery sequence (i.e., FLAIR) using a deep convolutional neural network trained on the annotated data of more than 500 heterogenous patient data sets from different scanner types and sequences.After segmentation, the lesions are classified into four regions: periventricular, juxtacortical, infratentorial, and deep white matter.This classification is based on a decision tree that uses multiple image-based features as input, i.e., lesion size and location, grey/white matter ratio, form factors.The lesion count and the total lesion volume are reported for the entire brain and for each region separately.When multiple scans of the same patient are available, mdbrain ® performs a longitudinal analysis and classifies lesions as either "old", "new", or "enlarged" (Appendix B as example report).This longitudinal analysis is based on a separate deep convolutional neural network, which uses multiple image-based features of the old and the new scan [32].Both the lesion characterization and the volumetry report are made available to the PACS within minutes after image acquisition (average processing time of volumetry within 5 min for computers with a suitable graphical processing unit).Mdbrain ® is an off-the-shelf, built-in software that was not specifically adapted.The training/test data of the algorithm did not contain data from the University Medicine Greifswald.Automated solutions for data transfer and the initiation of the analysis by the ML software are available (auto-pull, auto-routing).In the workflow of the University Medicine Greifswald, the radiological technician manually sent the initially acquired two MR sequences: isometric T1-w (slice thickness 1 mm, no gap, TR/TE 2100/2.5, matrix 256, FOV 250 × 250) and isometric t2_FLAIR (slice thickness 1 mm, no gap, TR/TE 7000/379, matrix 256, FOV 250 × 250) or 2D-FLAIR into the PACS, and the radiologist in charge had the choice to manually send it to a closed, local server hosting the ML software, initiating the software analysis.During the ML software analysis, further MRI sequences were acquired (DWI 4 mm, T2 sagittal 2 mm, T2 fat sat.coronal 2 mm).
The radiologist then decided if additional contrast-supported sequences should be added, depending on the information of the ML report and/or conventional lesion comparison.
The interview information was contextualized with readily available information about MS MRI examinations from the hospital information system/radiology information system (HIS/RIS) software from 2019 to 2022.Parameters such as the proportion of MS MRIs with the use of the ML software, annual examination frequencies, MRI machines/field strength, homogeneity of utilized MRI sequences, and mean duration of measurement protocols were systematically obtained from HIS (Version 7.2.2) [33]/RIS (Version 70.0.15115.0)[34] software.There were documented time stamps for the MRI sequences and the final proofreading of the diagnostic report, but not for the duration of the initial creation process for the diagnostic report.Since there were no readily available data about the written report times, we conducted an exemplary time measurement of the diagnostic evaluations by medical personnel without the assistance of a machine learning software, which involved assessments by 5 radiologists (1 neuroradiologist and 4 experienced radiological residents with >5 years of professional experience).From a pool of 25 randomly chosen complex MS MRIs with at least 10 lesions, each physician was assigned to evaluate the T1-and FLAIR-images of 5 randomly chosen MRIs and their respective preliminary examinations.Due to randomization, 4 MRIs were analyzed twice (by different radiologists) and 4 were not chosen at all.Accordingly, a total of 21 assessment times were recorded by 2 less experienced radiologists (<1 year of professional experience) without ML support.The documented times of the radiologists to compare the T1 and FLAIR sequences of the current MRI with those of the respective preliminary examination were measured.The goal for the radiologists was to capture all the relevant pathologies necessary for a written report and, if necessary, to recommend supplementary contrast administration.For comparison, we measured the times it took for two inexperienced radiologists (<1 year of professional experience) to capture all the relevant information from each of the conventionally analyzed MRIs with the assistance of ML software.Since the same MRIs were assessed with and without ML support, paired t-tests with SPSS Statistics (version 29.0.0.0;IBM, Armonk, NY, USA) were performed to test for significant differences in the mean assessment times (significance level = 0.05).The participating personnel were familiar with the conventional as well as with the ML software-supported way of evaluating MS MRIs.In the daily work routine, the radiologists are advised to control all ML findings for potential errors.Additionally, all findings (including those implemented through ML) are analyzed and corrected if necessary by a neuroradiologist before final approval.We used contrast agent administration rates, based on billing data, for a comparison of the proportion of examinations, in which contrast agents were administered.
Results
To answer the question of how the introduction of the ML software has influenced the workflows of radiologists, radiologic technologists (MR technicians), and clinicians, we initially aimed to objectify the workflows through process visualization and descriptive analyses.Between 2019 and 2023, an average of 352 MRI head examinations of MS patients per year were performed, for MS purposes alone.Data homogeneity was good with >80% of the examinations performed at 3T field strength (MAGNETOM Vida and MAGNETOM Skyra Fit), and a standardized sequence protocol for MS examinations, only deviating in initial examinations, with additional optic nerve sequences.The ML software mdbrain ® was gradually implemented in the workflow, starting in the last quarter of 2021.The usage of the product by the attending personnel was optional.The portion of MS examination with a report from the ML software rose within 3 months to over 85 percent.
Figures 1 and 2 show the respective MRI workflow with and without the use of the ML software.Decision situations with and without contrast agent administration are shown in each case.If the patient has the first MS MRI or describes a change in symptoms since the last follow-up MRI, an intravenous line is inserted right after the patient has been informed and given consent.First, native images (T1 and FLAIR) and contrast-enhanced MRI images are acquired sequentially.At the end of the MRI examination, the patient is discharged.Depending on the radiologist's availability, the MRI images are analyzed, comparing the newest images with previous findings.The MRI examination is finalized by writing the report.An interim evaluation of the MRI images does not usually take place.If the patient does not initially describe any change in symptoms for an MRI examination without the ML tool (Figure 1), only native MRI images are initially taken after the patient gives informed consent.After the images (T1 and FLAIR) have been transferred to the PACS system manually by the MR technician, the radiologist analyzes the images compared to previous lesion sizes and numbers, and possible signs of new or progressive atrophy.During this time, the patient remains in the MRI scanner, where further sequences are acquired.If there are any abnormalities, further contrast-enhanced images are supplemented.The crucial time factor at that point is that a radiologist must first be available to carry out a comprehensive assessment of the images.If the radiologist is busy with other examinations or other clinical activities, there will be waiting times for the MRI patients and staff.
For the workflow using the ML software (Figure 2), a check is carried out before the patient enters the MRI examination to determine whether previous findings from earlier examinations are already available.If this is the case, the radiology assistants check whether these findings have already been analyzed using the ML tool.If not, this is conducted subsequently.The corresponding report is then already available for comparison for the upcoming examination.Once the patient has given their informed consent, native MRI images are taken, as in the classic workflow.The radiological technician manually sends the initially acquired MRI sequences (isometric T1-w, and isometric t2_FLAIR or 2D-FLAIR) to the PACS, and the radiologist in charge manually sends it to the closed, local server hosting the ML software, initiating the software analysis.A data transfer to external servers is not necessary.There are already automation solutions for this process step that prevent potential waiting times due to the pending transfer.Automation solutions for this process step have not yet been implemented at the Institute for Radiology and Neuroradiology at the University of Greifswald.The MRI images are then analyzed, and the lesion report is generated by the ML tool.This usually takes less than five minutes and therefore covers approximately the time the patient remains in the MRI scanner for further images anyway.The lesion report is used as the basis for a radiological assessment and a decision on the need for further contrast-enhanced images.Using the ML tool seems to offer especially a reduction in the time to decide on further contrast-enhanced images.The radiologist must now review the lesion report, in which potential changes to previous examinations are already identified by the ML tool.The time required to evaluate the native MRI images and compare them with previous findings could be reduced compared to the classic approach, depending on the time intensity of conventional comparisons.
Following the introduction of the ML software, radiologists, MR technicians, and physicians emphasized the following five aspects of its use in routine care: 1. Workload and Efficiency: The integration of the ML software into the MS MRI workflow simplified the decisionmaking process.By directing attention to specific lesions/areas of interest, the ML report simplified the manual examination process.The routine task of lesion counting was shifted to the ML software, which freed up radiologists to focus on other tasks at hand.It was mentioned that the algorithm also showed promise in being more sensitive in detecting lesions, especially in complex cases with a large lesion count.It was suggested that the ML software might speed up decision-making, helping physicians to decide on further actions, like administering contrast agents or changing the therapeutic regime.
Systematic Errors vs. Human Interpretation:
The errors made by the ML software were perceived as rather systematic, potentially differing from the variations in an individual radiologist's interpretation or the inter-observer differences of different radiologists.The algorithm apparently showed a high negative predictive value, making it potentially beneficial for detecting lesions and accurately identifying the absence of new ones.Citation 3. "From my perspective, one would methodologically describe it as having a high negative predictive value.This algorithm detects many lesions that may not actually exist, but when the algorithm indicates 'there is no new lesion', there typically is indeed no new lesion.[. ..]I always find that, at a qualitative level, there is a significant distinction when it comes to errors; [errors made by the machine learning software] tend to be systematic in nature.This is in contrast to a radiologist who might have good and bad days.The quality of their interpretation may differ when the radiologist writes the report first thing in the morning versus [...] at 10:30 p.m. in the evening.When [a patient] has 173 lesions, and now you're [required] to find the 174th, it's not ideal.In such cases, an algorithm ultimately proves to be not only faster but also more sensitive".(neurologist)
Lesion Analysis and Communication:
The ML software could make quantitative lesion analysis easier, providing information that might be challenging or impossible to obtain manually.The software might enable more precise location descriptions of new lesions, which could improve communication between doctors and patients.Especially post-examination communication was reportedly faster and more efficient.The reports generated by the ML software were reported to be visually appealing and easily accessible to patients, potentially enhancing the quality of communication in doctor-patient interactions.This applies not only to the initial diagnosis of MS, but also to the context of the course of the disease.The reports can be used to transparently communicate the MRI recognizable course of the disease and make any new clinical symptoms that may have emerged easier to understand.In a competitive environment, an improved patient loyalty was considered plausible.Citation 4. "So, [the lesion load] is presented in a way that's easy for patients to understand, which I think is excellent.I therefore believe the quality of communication between doctor and patient is improved through it.[. ..]Communication between doctor and patient, in my opinion, seems to have decreased.This is because [communication] is not necessary in every case now.But communication now is perhaps a bit more targeted.[. ..]I find the presentation of this report, as it is visually designed, to be very appealing, and it is some-thing that is highly accessible to patients".(neurologist) Citation 5. "Well, there's less communication between doctor and patient before the MRI examination.However, if desired, communication after the examination is much faster and easier.[. ..]The volume of the lesions measured by the machine-learning software is something I hadn't previously noticed.[...] Measuring the volume of each individual lesion manually would be unrealistic and not feasible during the course of a shift".(radiologist 2)
Time Savings and Workload Distribution:
The utilization of machine-based software products for lesion assessment enables radiologists to make quick and flexible decisions regarding the necessity of contrast-enhanced imaging studies.In our sample analysis (Table 1), the experienced radiologists (>5 years of training) required an average of 296 s per MRI to capture all the relevant contents for complex MS MRIs (w/o ML).Thereby, the time spent is independent of the number of lesions; there is no statistical correlation (Figure 3).With the support of the ML software, the inexperienced radiologists (<1 year of training) took an average of 82.4 s to capture all the relevant contents in the same MS MRIs (avg.w ML).When using ML, the assessment time to capture all the relevant information from an MS MRI is significantly shorter; on average, the time for the assessment is reduced by 210 s.In addition, as can be seen from the boxplot, the spread of the examination times is significantly narrower when ML is used.Regardless of the measurement times, the perception of time savings was confirmed by the process participants interviewed: Regardless of the measurement times, the perception of time savings was confirmed by the process participants interviewed: Citation 6. "Additionally, the time it takes for the program to generate a report for me is roughly five minutes.[...] It would take [a reporting radiologist] significantly longer to manually do it at this level of thoroughness and conscientiousness.[. ..]While the program is running in the background, the reporting radiologist is able to work, and then make an ad-hoc decision whether further MR sequences are necessary.This definitely saves a significant amount of time".(radiologist 2) Citation 7. "When new lesions are assessed by the machine-learning software, you can provide a precise description of their location, which wouldn't be feasible with manual assessment.[. ..]If you are familiar with the colors [which highlight each lesion's location], the reporting radiologist can assess the report created by the program and form a preliminary result within less than ten seconds.Because it's color-coded, you can immediately tell if a lesion is new or known.In clinical routine, the longest part [of the process] is usually waiting for the program's report".(radiologist 1)
Contrast Media
We did not find a significant reduction in the number of contrast-enhanced examinations after the implementation of the ML software, comparing the fraction of contrastsupported MRI examinations from 2019 to 2023.However, it must be emphasized that a new MRI guideline for MS was published during the period under review, which now recommends the avoidance of contrast agents in the follow-up examinations of MS patients.Against this background, the effects of the now no longer recommended use of contrast media overlap with the faster and more objective decision for or against the use of contrast media based on the ML report.Citation 8. "[...] By allowing us to evaluate lesion load simultaneously [to the MRI], AI enables us to be more flexible in decision-making on whether we want to add contrastenhanced sequences.[This] is advantageous in avoiding unnecessary placement of intravenous cannula and administration of medication".(radiologist 1) Citation 9. "The program quickly helps me decide whether to administer the contrast agent or not, and I believe this really does reduce the use of contrast-agent in general".(physicians 2)
6.
Drawbacks and Difficulties The procurement and installation of a server specifically configured for this purpose were necessary.Furthermore, coordination with the legal department, IT department, and data protection was required.
The interviewees mentioned the weaknesses of the ML software in the posterior cranial fossa and generally in artifact-prone areas.
Citation 10. "The ML software usually detects a little more than I do, but sometimes those are lesions that aren't actually real; they're just MRI artifacts that occur.So, I critically examine them to determine whether they're genuine or not".(radiologist 2) Citation 11. "We had initial issues with the lesions located in the posterior cranial fossa.It's already a bit better.It's not yet optimal, but it has improved.So, I know I need to check ML findings here".(radiologist 1)
Discussion
The implementation of the ML software in the MRI examination workflow for patients with multiple sclerosis (MS) seems to yield several advantageous effects.Introduced as an optional support software for the radiologists in charge, the ML software was accepted rapidly, becoming the preferred way of analyzing an MS MRI.The interviewed radiologists highlighted a reduction in the screen-reading workload as a reason, perceiving manual lesion counting as a laborious task.The interviewees described a shift from time-consuming lesion comparisons to rather checking predefined results, allowing a reallocation of resources towards other tasks at hand.The reduction in the workload was accordingly pronounced in cases involving complex findings with a high number of lesions or complexly confluent lesions, which are more challenging to compare with the classic approach of lesion counting [35].The adoption of the ML software seems to create significant time savings for the radiologist, reportedly taking approximately five minutes compared to a considerably longer time for manual reporting.A reduction in screen reading times seems to be confirmed in our sample of report times, in which inexperienced radiologists with ML software were 3-4 times quicker than experienced radiologists without, and comparably quick to a neuroradiologist without ML software.Furthermore, the range of times required when using ML is significantly narrowed, which considerably facilitates the planning of personnel deployment in the long term.The background operation of the software allowed radiologists to work simultaneously.The accuracy of the ML software was described as reliable.One radiologist even stated that human radiologists could not possibly achieve the level of precision of the ML software in complex lesion patterns.However, it is essential to further objectify the reliability of using the ML software in this context, as missed lesions could delay necessary alterations in therapeutic regimes aimed at preventing neurologic symptoms and functional deterioration.An improvement in the precision and comparison of the lesions in radiological reports seems likely, and thus supports the results provided in the study by Barnett et al. on the comparability of ML-based evaluations [36].In the radiologists' experience, the usage of the ML software tended to lead to a higher sensitivity and a lower specificity, due to often artifact-related false-positive findings.The interviewees described the typical errors of the ML software as false positive and rather systematic, in a way, that they often occurred in artifact-burdened regions of the brain, i.e., close to the skull base.Those false findings might be easier to identify since they are typically located.In comparison, the variations in a radiologist's interpretation might be more diverse, being influenced by factors like workload and fatigue towards the end of a workday.By optimizing workflow and curtailing examination durations, the software could enhance efficiency, allowing for the examination of a greater number of patients within a given timeframe.This improved cost-efficiency is appealing to healthcare facilities aiming to increase patient throughput.Moreover, it could shorten patients' waiting times in situations of relative MRI scarcity.The influence of the ML software's role on decision-making processes regarding the necessity of contrast application might be noteworthy as well.Although there is no evidence of contrast media residues with macrocyclic contrast media, the current MS guidelines call for a well-reasoned approach if contrast media are necessary or not.Optimizing contrast applications could contribute to material and time conservation and the objective of mitigating potential patient risks associated with the administration of contrast media.In the future, the implementation of additional sequences might additional value to ML software performance.The central vein and peripheral rim signs are examples of promising findings in susceptibility-weighted imaging (SWI), which could further improve the accuracy of ML software assessment [37][38][39][40].The generation of structured and easily comprehensible reports by the ML software facilitates communication between radiologists, patients, and clinicians.The lesion report generated by the ML software was highlighted as beneficial by the neurologist, as it provided a comprehensive overview of the development of brain lesions.Changes in the lesion load or atrophy could thus be easily assessed by non-radiological personnel and patients, leaving only borderline findings open for radiologic feedback.The reports facilitated the communication of findings, giving the patients understandable information about their current disease burden, and possibly necessary alterations of the therapeutic regime.The report may also foster physician-patient relationships and improve patient loyalty, especially in an out-patient setup.
Limitations: The results reflect the experiences of a limited number of employees from a single MS-treating facility, and therefore do not allow for universally applicable conclusions.In particular, the representation of the patient's perspective is based solely on the secondary depiction provided by the treating neurologist, although studies underline the patient's high interest in MRI education [41,42].The validity of ML software lesion counts should be further objectified and compared to human precision.Although human lesion analysis remains the gold standard, superior human performances in highly complex MS examinations seem doubtful and other study results already indicate this [36,42,43].While the study aimed to assess the general impact of the mdbrain ® software, the assessment was primarily focused on depictions of the diagnostic and workflow benefits by medical personnel.The report time measurements involved a small sample size, which may limit the generalizability of the findings.However, this result is consistent with the results of a study on ML-assisted breast cancer diagnostics, which showed a significantly lower workload with a similar cancer detection rate [44].Other aspects, such as communication patterns, decision-making processes, and overall efficiency, may not have been fully captured.The contextualization of interview information with historical data from the HIS/RIS software might not fully reflect current practices or future trends in MS MR examinations.Mdbrain ® is a proprietary software and can thus not be fully described methodically.The use of commercial software for transcription may introduce errors or biases during the transcription process.Additionally, the accuracy and completeness of the historical data could impact the validity of the comparisons made.For example, the approach to use billing data to estimate changes in the use of contrast media was abandoned due to data inconsistencies.The report time measurements involved a small sample size, which may limit the generalizability of the findings.
Conclusions
In conclusion, our study suggests that the integration of ML software products such as mdbrain ® in an MRI workflow offers opportunities to streamline processes, enhance efficiency and precision, and improve communication with other physicians and patients.However, it is essential to acknowledge the limitations and uncertainties associated with our findings and continue exploring the broader implications of such implementations in further studies.
Informed Consent Statement:
The necessity of obtaining written consent from the patients was waived, as the study involved a retrospective examination of the evaluation methodologies of clinically necessary MRI examinations, and no personal patient information was utilized.The interviewees voluntarily agreed to participate in the study.The interviews took place during working hours and were not otherwise compensated.
Figure 2 .
Figure 2. MS MRI workflow with ML tool.Citation 1. "The process of decision-making is made significantly [...] less complicated.[The machine learning software] directs you as to which lesions to examine.[...] the task
*Figure 3 .
Figure 3. Influence of the utilization of machine learning on the time needed to capture relevant data per MS MRI, n = 25.(a) Time to capture MS MRI depending on the number of lesions.(b) Time to capture an MS MRI with and without the use of ML (w/o ML: without machine learning; w ML (1): with utilization of machine learning, radiologist 1; w ML (2): with utilization of machine learning, radiologist 2; avg.w ML: average time to capture an MS MRI with utilization of machine learning).
sequences are necessary.This definitely saves a significant amount of time."(radiologist 2) Citation 7. "When new lesions are assessed by the machine-learning software, you can provide a precise description of their location, which wouldn't be feasible with manual assessment.[…] If you are familiar with the colors [which highlight each lesion's loca-
Figure 3 .
Figure 3. Influence of the utilization of machine learning on the time needed to capture relevant data per MS MRI, n = 25.(a) Time to capture MS MRI depending on the number of lesions.(b) Time to capture an MS MRI with and without the use of ML (w/o ML: without machine learning; w ML (1): with utilization of machine learning, radiologist 1; w ML (2): with utilization of machine learning, radiologist 2; avg.w ML: average time to capture an MS MRI with utilization of machine learning).
Table 1 .
Time for the review of MS MRIs with and without ML. | 7,582.4 | 2024-05-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Multi-modal Landslide Monitoring Data Fusion Algorithm Based on Resistivity Imaging
In the process of landslide deformation monitoring, the indicators of monitoring system based on surface 3 displacement cannot accurately reflect the deformation evolution law of deep geotechnical body. Although the joint 4 time curve of deep displacement monitoring of borehole and related monitoring data can reflect the deformation 5 characteristics inside the slope body, it cannot spatially describe and explain the overall deformation process of 6 geotechnical body completely due to the limitation of technical conditions such as borehole. In this paper, using the 7 characteristics of resistivity imaging technology with fast and accurate collection of electrical information of 8 subsurface medium and multi-dimensional imaging, we take resistivity imaging data as complete modal data and 9 fuse deep displacement and groundwater level and other modal data. Through joint depth matrix decomposition and 10 optimization, layer-by-layer modal semantic matching and updating, the distribution and representation differences 11 of modal data are compensated, and the analysis tasks such as classification and clustering of incomplete 12 multimodal data are completed, and the inversion results of resistivity data are updated according to the output 13 modal shared eigenvalues to realize effective multidimensional imaging monitoring of the internal deformation 14 process of landslide geological bodies. 15
Introduction 17
Landslide geological hazards are complex physical systems with a long time evolutionary process (Xu et al. 18 2008). Studied in the direction of evolutionary mechanism, landslides are the result of the joint action of 19 fundamental, action and coupled fields generated by the structural, seepage, stress, chemical and temperature fields 20 Geological hazards are characterized by frequency, hazard and complexity, and it is difficult for a single 31 monitoring means to accurately reflect the landslide evolution process, and the integrated analysis and fusion of 32 data obtained using different methods such as multi-temporal and multi-scale displacement monitoring, 33 hydro-meteorological, and geological monitoring, and geotechnical interaction monitoring of the sky-lands is of 34 great significance for predicting geological hazards (Lin et Multimodal information can describe the same data instance from different sides, and effective analysis of 48 multimodal complementary information can obtain a more reasonable representation of data characteristics. The 49 main causative factor of landslide generation is the weakening of soil shear strength in the process of rainwater 50 infiltration caused by atmospheric precipitation, and the slope-sliding force is greater than the soil shear resistance. 51 In this process, different stratigraphic structures within the geotechnical body, with the infiltration of rainwater, will 52 form obvious resistivity differences near the slip surface. The resistivity imaging technique, based on the significant 53 differences between the material composition, porosity, structure, and water content of the landslide weak body 54 (face) and the surrounding rock (Carlo et al. 2013;Yin et al. 2018), measures the electrical conductivity 55 information of the subsurface medium by scanning a large-area electrode array and can obtain a complete 56 multidimensional electrical data set, reflecting the internal structure of the geological body. Therefore, a 57 multimodal dataset consisting of monitoring data such as resistivity imaging data, deep horizontal displacement, 58 soil moisture, groundwater level and rainfall can effectively reflect the process of obtaining structural deformation 59 inside the landslide geotechnical body (Shao et al. 2013;Yin et al. 2017). 60 The multi-source data fusion technology can comprehensively analyze and reasonably utilize the multi-source 61 heterogeneous data of landslide monitoring, eliminate the possible redundancy and mutual exclusion between data, 62 and make all kinds of data complement and cooperate with each other, thus effectively improving the reliability of 63 landslide monitoring data and increasing the utilization rate of landslide monitoring data (Qiu 2017; Zhao et al. In large landslide-monitoring sites, large-area, multi-dimensional resistivity data collection is required. electrodes (data transmission between mainframe and sub-stations through CAN bus, the number of connected 130 sub-stations can be expanded as needed). For different monitoring environments, the layout of the host, sub-stations 131 and smart electrodes can be flexibly adjusted (Fig.3), is a layout designed for the need of long-distance monitoring 132 of high slopes. 133 Aiming at the migration process of underground seepage field in landslide geological disaster evolution, which 134 is often irregular and its transport speed and direction have the characteristics of sudden change, the initial scanning 135 and collection of resistivity is performed in a large range (cross-electrode power supply and data collection) by 136 rapidly changing the collection area. After determining the range of landslide-hidden trouble spots, the intelligent 137 electrodes are encrypted and the electrode network is automatically coded, and the working state of electrodes is set 138 according to the measurement needs, and the measurement is realized by completing the conversion of each 139 electrode state through the host control to realize the resistivity monitoring data collection with real-time and 140 dynamic variable monitoring point density and multi-dimensional resistivity dynamic collection grid structure in 141 In the actual situation of the landslide geological hazard monitoring process, there is a large correlation 151 between resistivity imaging technology, deep displacement and related monitoring data, and the modal feature 152 information of each monitoring data can describe the same data instance from different sides, but it is difficult for 153 various monitoring data to constitute a modal data set with complete feature values in time and space due to 154 technical conditions, to achieve effective analysis of multi-modal complementary information and be able to allow 155 for effective analysis of multimodal complementary information and a more reasonable representation of data 156 characteristics. The multimodal data fusion algorithm for landslide geology body deep displacement monitoring 157 ensures the local similarity of each modal data by encoding the geometric structure of the data with graph 158 regularization factors, constructs a deep semantic matching model that fuses modal deep neural networks and 159 incomplete multimodal matrix decomposition, and then updates and optimizes the model ( . By jointly training and optimizing the modal private depth network and the base matrix, as well as 165 the modal consistent encoding matrix, multimodal depth semantic shared features in the subspace will be obtained. 166 The flowchart is shown in Fig.5. 167 To ensure the consistency of each modal data with its geometric structure in the potential subspace, the learned 168 shared encoding matrix is C P represented regularly by the invariant graph model. Assuming that there are two data 169 instances close to each other in the original data space The incomplete multimodal deep semantic matching model can be represented as 185 By sharing the characteristics C P and ) direction, and about 120 m wide in the north-south direction, and the natural slope angle of the slope surface is 209 30°-37°. The cover of the front edge is mainly residual slope deposits and crumbling slope deposits, with a small 210 number of fully weathered mudstone chips, while the middle and back edge cover are mudstone with different 211 degrees of weathering respectively. The lithology is dominated by calcium-bearing mudstone in the upper part, 212 calcareous siltstone in the middle part, interspersed with calcium-bearing mudstone, and conglomerate, sandstone 213 and conglomerate-bearing sandstone in the bottom part. This stratigraphic structure is favorable for rainwater to 214 continuously replenish groundwater from top to bottom and infiltrate into the lower mudstone, and the muddy 215 debris and weak interlayer in the rock layer are immersed in the water for a long time, which will cause the strength 216 of the soil body to decrease. Under the influence of multiple effects such as self-weight of the landslide body, 217 rainfall infiltration, and vibration caused by human engineering activities, the cohesive force inside the landslide 218 body gradually decreases, i.e., the slide force continues to increase due to rainfall infiltration and other effects, 219 while the anti-slip force decreases rapidly due to shear damage, the landslide body shows increased cumulative 220 deformation and contributes to further weakening of the weak zone inside the landslide body. A monitoring profile 221 was set up in the middle of the slope in the form of the profile shown in Fig.6, and monitored for 182 consecutive 222 days from June to December 2019. 223 The rainfall-monitoring point is arranged at the leading edge of the slope YL1, rainfall, and deep displacement 224 data monitoring, sampling frequency (triggered acquisition). 182 consecutive days of daily average rainfall 225 monitoring data at point YL1 are shown in Fig.7. 226 There is an obvious continuous precipitation process on the 90th-100th monitoring day, because the surface 227 displacement rate of the landslide correlates well the rainfall, while the deep displacement rate of the landslide has 228 a good correlation with the rate of deep displacement of the landslide has a certain lag with the amount of rainfall, 229 which shows a greater influence on the deformation of the soil at the trailing edge and the central slip zone in the 230 slope of this experiment. The monitoring frequency is 0.5 times/hour, and the monitoring period is 182 days. Figure 8 shows the 236 average daily deformation results of ZK2 monitoring on monitoring days 85-120.From the 36-day continuous 237 observation curve of the central monitoring hole ZK2 (Fig.8), it can be seen that the displacement is basically 238 generated by the 0-5m hole section, the maximum sliding displacement at the mouth of the hole is 16.15mm, the 239 curve forms a more obvious sliding surface at 3m, the sliding displacement above the sliding surface is larger, 240 while the lower displacement is smaller, and the landslide is dominated by shallow overall sliding.
There are more than 10 kinds of acquisition devices commonly used in resistivity imaging technology, and the 242 Wenner device and Wenner-Schlumber device are used as experimental devices in this paper. Among them: 243 Wenner device: AM=MN=NB, A, M, N, B move to the right at the same time point by point, with the increase of 244 pole spacing, the depth through which the profile inversion is interpreted also gradually increases, the electric field 245 distribution of Wenner device is mainly directly below the center of the device, and the sensitivity function 246 becomes horizontal distribution. Wenner device is more sensitive to the vertical change of resistivity, used to detect 247 horizontal target body; Wenner-Schlumberger device-running pole way: this device between Wenner and 248 Schlumberger, the interval layer is 3a (a is the standard pole spacing), in 1-3 layer Schlumberger method running 249 poles, 4-6 layer MN interval becomes 3a, 7-9 layer MN electrode spacing becomes 5a, and so on, to get an inverted 250 trapezoidal cross-sectional map. Its high sensitivity value appears directly below between the measuring electrodes, 251 but the detection depth is small. The slip surface of landslide geological hazards is located within 30m below the 252 surface, and this depth is just within the sensitivity range of Wenner-Schlumber device with pole spacing (a=1m, 253 a=0.5m) (reducing the pole spacing can effectively improve the monitoring accuracy), and it is a more 254 ideal-monitoring device for landslide geological hazards because it takes into account both the horizontal and 255 vertical resolution. 256 Resistivity data collection was performed from the top to the bottom of the slope along the profile direction as 257 shown in Fig.6 with the measurement line , electrode spacing a=4 m, using a Wenner device (which has better 258 sensitivity to lateral structures) for resistivity data collection, the number of measurement electrodes was 60, the 259 supply voltage was 90 V, the maximum supply distance was AB=236 m, the effective measurement depth was 32 260 m, and on the 85th and 120th day of the monitoring process using DEM-3 distributed direct current meter with 261 smart electrodes was used to measure this profile 4 times/day. The Swedish high-density processing software 262 RES2Dinv was applied for topographic correction and data inversion processing, and the resistivity inversion 263 results were obtained as shown in Fig.9 for the 85th monitoring day (before rainfall) and the 120th monitoring day 264 (after continuous rainfall) in Fig.10 . Fig.9, Fig.10, Fig.13, and Fig.14, it can be seen that: ① with the increase of rainfall, the overall 289 structural resistivity value of this slope has a significant decrease, and the data fusion results near the two boreholes ZK1 290 and ZK2, with three kinds of modal data fusion near the surface (0-5 m), the error is less than the deep data fusion results; 291 ② at 6-8 m of ZK1 and 12-14 m of ZK2 are slip surface, the structure of the two parts above and below the slip surface 292 are different, which leads to the obvious difference of discontinuity in resistivity data, and the error of the fusion result 293 reaches 0.36%, which is lower than the fusion result of other depth data of uniform medium, indicating that the fusion 294 algorithm proposed in this paper can effectively monitor the overall deformation and displacement of the slip surface. 295 Fig.11 and Fig.12 show the measured data, fusion results and error analysis of the electrical data before and after the rain 296 in the horizontal fourth layer (depth of 8 m), respectively.
297
As can be seen from Fig.11, the electrical data of the slope as a whole at the resistivity data collection points 3~8 298 and 13~40 at 8 m below the ground surface produced significant changes with rainfall infiltration, and the different water 299 saturation of the rock body led to obvious differences in the electrical data. The results correspond to the main slip 300 surface shown in Fig.6. From Fig.12, it can be seen that the error of the results of the fusion before the rain (-1.7% to 301 4.2%) is significantly larger than that after the rain (0.4% to 2.9%), and the error of data fusion is around 1% near both 302 ZK1 (collection point 10) and ZK2 (collection point 28). Fig.13 and Fig.14 show the 2D inversion effect of the output 303 after updating the resistivity imaging technique data by data fusion.
304
The deep displacement monitoring of the borehole can provide the most direct and effective correction and 305 supplement to the resistivity imaging data, although continuous measurement cannot be achieved in space. From Fig.12 306 and Fig.13, it can be seen that: ① comparing Fig.9 and Fig.12, the results before and after data fusion at the leading edge
Conclusion 320
More than 85% of landslide geological hazards are caused by the dynamic changes of soil seepage field caused by 321 atmospheric precipitation and its resulting deep displacement, so the study of internal deformation evolution mechanism 322 of landslide geological body is the key to landslide monitoring and prediction, and when the modal distribution or 323 characteristics differ greatly, it is difficult to ensure the fusion by only using a linear or nonlinear transformation to 324 compensate for the semantic deviation between multi-modal data for monitoring internal structural changes of landslide 325 body The validity of the results. The depth semantic matching multimodal data fusion algorithm for landslide geology 326 monitoring based on resistivity imaging technology uses the depth semantic matching mechanism of incomplete modal 327 data, explores the depth semantic sharing features of modal data, and establishes multilayer nonlinear correlation among 328 multimodal data by jointly optimizing the fused modal private depth network and the graph regularization-based 329 incomplete modal data learning model, and then obtains the depth semantic matching features of multimodal data. The 330 deep semantic matching features of multimodal data can effectively compensate for the large semantic bias between 331 modalities and obtain more accurate data sharing semantics. In the later stage, by combining multiple surface 332 displacement monitoring data sets, heterogeneous modal data migration fusion with multi-layer semantic matching can 333 obtain the overall three-dimensional dynamic changes of landslide geological body, which provides powerful technical 334 support for landslide geological disaster monitoring and prediction. Figure 1 The arbitrary quadrupole device (Fig.1), with a topographic correction, enables the inversion of 2D pro les of geological bodies with different topography.
Figure 2
In large landslide-monitoring sites, large-area, multi-dimensional resistivity data collection is required. The system controls multiple electrical measurement sub-stations (main functions include: measurement , control the collection sequence and data upload) through the host computer, and each sub-station controls the smart electrodes connected to it (the electrodes internally realize the function conversion between power supply A, B, and measurement M, N), and the collected data are stored in the electrical measurement sub-stations and transferred to the host computer (Fig. 2).
Figure 3
For different monitoring environments, the layout of the host, sub-stations and smart electrodes can be exibly adjusted (Fig.3), is a layout designed for the need of long-distance monitoring of high slopes.
Figure 4
Fig.4a shows a schematic of the dynamic moving electrode grid when scanning the hidden area over a large area. In which, the solid circle on the left side is de ned as the scanning area, and the dashed circle on the right side is de ned as the area to be scanned. When a hidden spot is found, it can be switched to the encrypted scanning mode shown in Fig. 4b. Since the effective depth and accuracy of the inversion of resistivity imaging has been depending on the spacing, the exible electrode grid layout can effectively reduce the pole spacing and improve the accuracy of the complete modal data set based on resistivity imaging data and the reliability of the multidimensional imaging of the internal structure of the fused landslide.
Figure 5
By jointly training and optimizing the modal private depth network and the base matrix, as well as the modal consistent encoding matrix, multimodal depth semantic shared features in the subspace will be obtained. The owchart is shown in Fig.5. The rainfall-monitoring point is arranged at the leading edge of the slope YL1, rainfall, and deep displacement data monitoring, sampling frequency (triggered acquisition). 182 consecutive days of daily average rainfall monitoring data at point YL1 are shown in Fig.7.
Figure 8
The monitoring frequency is 0.5 times/hour, and the monitoring period is 182 days. Figure 8 shows the average daily deformation results of ZK2 monitoring on monitoring days 85-120.From the 36-day continuous observation curve of the central monitoring hole ZK2 (Fig.8), it can be seen that the displacement is basically generated by the 0-5m hole section, the maximum sliding displacement at the mouth of the hole is 16.15mm, the curve forms a more obvious sliding surface at 3m, the sliding displacement above the sliding surface is larger, while the lower displacement is smaller, and the landslide is dominated by shallow overall sliding.
Figure 9
The Swedish high-density processing software RES2Dinv was applied for topographic correction and data inversion processing, and the resistivity inversion results were obtained as shown in Fig.9 Figure 10 for the 85th monitoring day (before rainfall) and the 120th monitoring day (after continuous rainfall) in Fig.10 .
Figure 11
As can be seen from Fig.11, the electrical data of the slope as a whole at the resistivity data collection points 3~8 and 13~40 at 8 m below the ground surface produced signi cant changes with rainfall in ltration, and the different water saturation of the rock body led to obvious differences in the electrical data. Figure 12 show the measured data, fusion results and error analysis of the electrical data before and after the rain in the horizontal fourth layer (depth of 8 m), respectively. Figure 13 the error of data fusion is around 1% near both ZK1 (collection point 10) and ZK2 (collection point 28). show the 2D inversion effect of the output after updating the resistivity imaging technique data by data fusion.
Supplementary Files
This is a list of supplementary les associated with this preprint. Click to download. renamedbb46f.doc | 4,579.4 | 2021-10-08T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geology"
] |
Development of the Information Security Management System Standard for Public Sector Organisations in Estonia
Standardisation gives us a common understanding or processes to do something in a commonly accepted way. In information security management, it means to achieve the appropriate security level in the context of known and unknown risks. Each government’s goal should be to provide digital services to its citizens with the acceptable level of confidentiality, integrity and availability. This study elicits the EU countries’ requirements for information security management system (ISMS) standards and provides the standards’ comparison requirements. The Estonian case is an example to illustrate the method when choosing or developing the appropriate ISMS standard to public sector organisations.
Introduction
Standardisation aims to optimise the process management, compare defined objects with each other, enable integration and interoperability of systems, cost optimisation and preparedness to adapt to new situations [1]. There are standards designed for information security management systems (ISMS) as well (few examples are [20,21,22]). In private organisation the management decides which ISMS standard to follow based on organisation requirements. At the national level the stakeholders' objectives and the national characteristics (e.g. unique technologies such as the X-tee [2] or electronic identity solutions [3]), and cultural and linguistic peculiarities should be considered independently of each organisation requirements. There is also a need for the standard long-term central maintenance and reduction of administrative costs, or compliance with the regulations (e.g. EU GDPR [5]). At the national level the ISMS standard must ensure a comprehensive national defence and systems interoperability carried out by each organisation. EU regulation (NIS Directive [6]) defines t he c ross-union incident management and information sharing rules, but it does not provide the information security management framework for public sector organisations.
There is no standardised method or requirements on how to compare and show different approaches of the ISMS standards for public sector organisations at the national level. This method should consider the standards substantive comparison, the national security strategic objectives, and external interested parties' requirements or abilities. On the national strategic level this method can support decision makers, and also security specialists to find relevant arguments when choosing or planning to create an ISMS standard. This study aims to investigate what are the requirements to develop information security management standards for public sector organisations at the national level.
The paper is motivated by the development of the national ISMS standard for the Estonian public sector organisations. In this study we identify and structure the requirements for national ISMS using 12 EU national cybersecurity strategies. Then we share the example of how Estonian ISMS requirements can be structured using our study approach. Using the elicited requirements we compare three ISMS standards and illustrate how the assessment of the ISMS standards with the elicited requirements can be done based on the Estonian case. Our experience shows that the comparison of the elicited and sorted requirements and ISMS standards is a possible way that can be followed by the other countries that are looking for ISMS standards or framework for public sector organisations.
The paper is structured as follows: Sect. 1 gives an overview of the Estonian case and related work. Sect. 2 describes the research method. Sect. 3.1 guides the ISMS standards requirements elicitation and structuring and Sect. 3.2 illustrates the use of requirements in comparison of standards and presents the results with the Estonian case. Finally, Sect. 4 concludes the paper with the results and limitations.
Case Description
Estonia is an EU country with 1.33 million inhabitants. Estonia is known for its digital society imago and with the successful response to the first large-scale cyberattack against the entire state [15]. Estonian citizens, e-residents and organisations can use or provide more than 2860 digital services via eGovernment supported Data Exchange Layer X-tee (Estonian instance of the X-Road). More than 150 million requests per month are made via X-tee [16]. Majority of the transactions are made between public sector organisations. This context requires a clear understanding and mutual recognition of information security from the data exchange partners and data processors. The Estonian first version of information security management baseline standard called ISKE was developed and published in 2004 [27]. Now Estonia is developing its new national ISMS standard. In this paper we use Estonian case to illustrate how the elicited requirements for national ISMS can be used.
Related Works
We investigated the studies dealing with requirements to the ISMS standards and standards comparison. European Union Agency for Cybersecurity (ENISA) certification standards review report [12] is indispensable to understand the origin and functioning of standardisation organisations. The report is focused on certification, and provides assessment guidance on the certification schemes, but it does not provide direct input to the comparison of standards.
EU SPARTA project includes the overview of the security-related certification initiatives and the related standards at the national and international level, as one of it's deliverable [10]. It's aim is to inform project partners about available standards that the project partners can consider certifying their project deliverable against. The report does not follow any exact requirement or comparison requirement.
Pertinent collection of security standards are systematised by standardisation bodies authority, jurisdiction, applicability, document type and standards examples in [8]. This overview did not describe the requirements to follow or which characteristics of the standards to compare.
Overviews and summaries of standards can be found from security blogs or websites of the consulting companies. A similar descriptive approach can be found in [9]. The paper covers ISO security-related standards and mentions the Information Security Forum (ISF) Standard of Good Practice for Information Security, COBIT (ISACA framework) and BSI IT-Grundschutz (IT baseline protection). This work only describes the standards, not focusing on the requirements or comparison.
A systematic approach to the content analysis of the standards can be found in [7], where the authors have created the conceptual model for security standards and provide the template for the standards content comparison. Their approach can help organisations, but do not help at the national strategic level.
By standards web-sites, the content comparison is provided for standards compliance confirmation. Usually, there are tables where each row represents similar control of comparable standards [24,28]. These comparisons provide the sentence-by-sentence compliance confirmation on standard contents, but do not deal with other properties of the standard.
Finnish report [11] compares the cybersecurity situation of eight countries on the state level. The report provides a comparison of economical, educational, legal and social aspects of cybersecurity, and names the approaches of these eight countries. The report helped us to consider the relevant areas of the countries cybersecurity strategies.
The Estonian case can be illustrated with studies conducted in 1998 and 2003, which analysed the national security needs and security specialists ability to manage ISMS standards. The studies concluded, that Estonia needs baseline security with granular security measures catalog. [4] The same statements apply in today's Estonia [19]. ENISA report [13] compares 28 EU state cybersecurity strategies and has identified that one common strategic objective is to establish baseline security measures to harmonize the security practices in the public and private sector. Report did not create requirements for that.
Related works showed several approaches on how to compare the security standards and gave some overview of the standards. The related works did not give any suggestions or requirements on how to choose ISMS standards for public sector organisations on the national strategic level. Also, we revealed that national cybersecurity strategies could be an appropriate source of requirements elicitation for ISMS standards.
Research Approach
The research demonstrates the requirements elicitation when developing the ISMS standard, illustrated using the Estonian case presented in Sect.1.1. The paper's goal is to answer the question RQ: what are the requirements to develop information security management standards for public sector organisations on the national level? The research question can be divided into two subquestions: RQ1: how to find and what are the countries requirements to the ISMS standard? RQ2: how to use these requirements when developing the national ISMS standard?
Our research process is case-oriented and is illustrated in Fig. 1. We conducted two parallel processes. Firstly, theoretical approach is used to elicit requirements for ISMS standard (activity 1.1) at the national level. It is based on the National Cybersecurity Index (NCSI) [
Requirements Elicitation
NIS Directive [6] requires the EU member states to create and maintain national cybersecurity strategy and its implementation plan. National cybersecurity strategy is the fundamental source document for acceptable requirements of ISMS standards of the country among other strategic objectives.
For the security standard requirements elicitation we used the NCSI [14] database developed by the eGovernance Academy. eGovernance Academy collects links with publicly available evidence material of each country's cybersecurity documents [14]. We wrote out the ISMS standard's required properties of NCSI TOP 12 EU country's cybersecurity strategy and implementation plan. Then we collected similar requirements under one requirement. We generalised the elicited requirements to cover different countries' needs simultaneously. The requirements pass on the nature of the requirement, not the exact initial wording. Each requirement received the characteristic keywords. Finally, we got 15 requirements. We grouped the elicited requirements into three modules (see Table 1): • National security module determines the national security aspects like compliance with jurisdiction regulations and the national authority right to make or influence to make changes into the content of the standard. This module allows assessing the possible future cost related to adoption and maintaining the ISMS standard. The target group of these requirements are the organisations responsible for ISMS standard development and maintenance on the national level. • Content module helps to get to know the standard usability and adaptability issues related to implementation barriers and complexity. Basic Controls and Levelled Controls help to understand the implementation possibilities depending on the security needs. Technology Dependence and Adaptability with National Needs describe the flexibility of the standard controls. Risk Management Approach shows how risk management is included in the standard or requires separate management. The target group of these requirements are the organisations who have to implement the standard. • Assessment module requires the monitoring and auditing capabilities to assess the organisation's information security. The module characterises the needs and requirements outside the public sector. It is necessary to consider the availability and cost of resources like external certified auditors and audit bodies (target group of the module).
Each requirement received a unique ID (Nx, Cx or Ax) where x is a requirement sequence number and letter corresponds to the module the requirement belongs to. The county code in Table 1 shows the origin owners(s) of the requirement. [18], and new ISMS standard procurement document [19]. These sources take into account the requirements of information security regulations.
Identified Estonian requirements are sorted according to Table 1. The result is given in the Table 3 columns ReqID and Estonian Requirements. Some of the identified Estonian requirements have been collected under same requirement ID, as their final objective is similar (e.g. N4, C1, C2). Also, some are mentioned more than once under several requirements, because they serve several goals (e.g., N2, C8 -one of them requires the possibility to make changes in the standard, the other requires controls approach and flexibility to add national aspects).
ISMS Standards Comparison Example
By following the requirements in Table 1, we compared the three following ISMS standards: • ISO27001 ISO/IEC 27001:2013 Information technology -Security techniques -Information security management systems -Requirements [20], developed by international level standardisation body and recognised globally. • CIS20 CIS Controls v 7.1 [21], developed by industrial body, focuses only on information security. CIS20 provides TOP 20 security measures for organisations. • BSI ITG BSI IT-Grundschutz Kompendium [22], differs from previous standards by its included threats, requirements and security controls catalogues. BSI ITG is known as a baseline security framework which is developed by an EU member state national standardisation body.
Standards content comparison CIS20 has published separate web articles of CIS Mapping and compliance to provide the control-by-control mappings to ISO27001, GDPR, and some industry specific frameworks [24]. BSI has published the analysis of BSI Standards and Kompendium compliance from the ISO27001 perspective [28]. These compliance confirmation publications assert that through ISO27001 perspective, three comparable standards contents cover the same security areas and are compliant to each other's security objectives.
Standards comparison based on elicited requirements ISMS standards comparison results are presented in Table 2. The table gives a one-page overview of the similarities and differences of standards.
As the standard-setting similarities, we point out that the ISO27001 requirements and security objectives are reflected in other standards (C2). The standards are, thus, consistent with the security areas content. All three standards are intended to be used by a wide user community and do not impose restrictions to organisations by size, sectorality or industry field (C1). The introduction of risk management is required by all standards (C5). For all standards, there is one basic document supported by additional documents. It must be taken into account that the implementer must have all documents available (to take into account the cost to translation, maintenance, license fees) (C8, N4, N3, N5). None of the standard imposes restrictions on technologies directly (C6). An auditing and certification approach based on ISO27001 is suitable for all standards (A1, A2).
When deciding about standards, however, differences between standards become critical. For example, the chosen standards are part of different legal jurisdictions (N1) and there are also different funding schemes (at the moment: global, US, EU) (N2). Often, just through financing, it is possible to influence the content of the standards. This is important for national security considerations. The financing schemes of those three given standards differ by financier (national bodies, donations or state government) (N2). From the public sector's perspective, it could be a problem if the standard has a license fee (ISO27001) and is not freely available (N3). To assess the standard dynamics or statics we can compare the update cycle Contributors: US agencies, commercial partners. Financing: donations, grants, paid programs, product sales [26] Publicly reviewed contributions. Financing: German Gov.
N3
User based fee (also to translated versions) Organisations have different security needs and they are looking for matching security levels to optimise the security cost. So the organisations with lower security needs do not have to implement all the high-level measures. CIS20 and BSI ITG provide leveled approach (C3, C4). The volume of the guidance material can drive the usability of the standard (C8).
If ISO27001 and CIS20 are technology-free, then BSI ITG offers security measures suitable for the most common technologies (C6). Everyone can propose suitable profiles for the BSI, and if there exists a general approval, they will be integrated within a year into the composition of standard catalogues (C7, N5).
To summarize our comparison, the decision-maker should understand the differences and similarities of the standards, consider separately national security aspects (first module) and standards' content aspect (second module), and to weigh, how the auditing and certification schemes (third module) could work, and which resources are needed.
Estonian case standards assessment. From the perspective of the Estonian ISMS standard development it is important to compare the Estonian requirements with ISMS standards. In Table 3 we align the Estonian ISMS standard requirements with the compliance assessment to the three previously described ISMS standards (see Table 2). The qualitative sequence method has been used for the assessment: the most suitable standard in compliance with concrete Estonian requirement(s) is marked as "++", suitable with some exclusions is marked as "v" and not suitable is marked as "0", N/A is marked as "-". We used the assessment mark "+" for interim cases of "++" and "v". The result shows the differences between the standards in the National security module in Table 3. In the Content module the BSI ITG stands out with its positive results. In Estonian case the Assessment Module probably could not influence too much the decision making. The case shows that for Estonian public sector organisations, the most suitable standard to use is BSI ITG based standard.
Limitation and Conclusion
We investigated the national cybersecurity strategies and their implementation plans for requirements elicitation in their original languages using the Google Translate application (when needed). We avoided the progressing of the errors caused by machine translation by including the requirement in case ambiguity only if it appeared in both sources.
Second aspect to mention is that the national cybersecurity strategies are written in different detail and maturity levels. For example, the Greek documents covered 14 requirements out of 15, while we found only one requirement for the French public sector security. In order to bring the elicited requirements to the same maturity level, we ruled out very specific requirements for security measures and generalised them under Requirement ID C7. Also, the requirements are not with equal importance to national states. We suggest to assess them in the context of national objectives.
In the study, we elicited the ISMS requirements for public sector organisations in a form that supports reuse of the structured requirements. We used the structure of elicited requirements to compare three ISMS standards. In the example of the Estonian case, we showed how to compare requirements and standards. The result could be useful for small national states which wish to use the experiences and existing ISMS standards of other countries to develop their information security measures.
During the study, we perceived that all EU countries are simultaneously developing their standards or frameworks. Our working group came to the same conclusion with the ENISA report [13]. Hence the ENISA or other EU organisation could develop a central framework or baseline for public sector organisations security management, and each country could adapt Table 3. ISO27001, CIS20 and BSI ITG standards assessment based on requirements to Estonian ISMS standard (Notation: "++" -most suitable; "v" -suitable with some exclusions; "0" -not suitable; "-" -N/A; "+" -interim cases of "++" and "v")
Req ID
Estonian Requirements ISO27001 CIS20 BSI ITG National security module
N1
Standard should enable the baseline security to fulfil requirements of national and international regulations like GDPR, NIS-directive, etc. [17].
v 0 + N2 Standard should be flexible enough to add national content, measures or modules [19].
v v +
N3
Standard should be available free of charge [19]. 0 + ++ N4 The standards must transfer Estonian language and culture, i.e. be in correct language, terminologically validated and compiled for Estonians [17]. Correct language and consistent terminology should be used and validated [19].
C1
Information security should be integrated widely in all type of organisations and their processes [17]. Standard should be extendable for all public administration and industry organisations [17]. Standard should support public sector business processes [19].
C2
Standard should be based on an European or internationally recognised standards and practices [17,6]. In case of a translation adoption, the standard should retain the connections with original document sets [19].
C3
Standard should help optimising risk management by providing predefined measures for typical solutions [19].
0 v ++ C4 Implementation process should enable levels of implementations -the base implementation and advanced levels based on security requirements [19].
0 + ++ C5 Standard should use and adopt risk based approach for information and network security management [17].
C6
All technologies should be given equal opportunities regardless of the platform [17].
The obligation to use Estonian based technological solutions. Therefore, the standards must enable and propagate the use of X-tee and Estonian public key infrastructure (PKI) solutions. [19] + + v C8 Standard should be flexible enough to add national content, measures or modules [19]. 0 0 + | 4,550 | 2021-07-02T00:00:00.000 | [
"Computer Science",
"Political Science",
"Business"
] |
Application of machine learning for fleet-based condition monitoring of ball screw drives in machine tools
Ball screws are frequently used as drive elements in the feed axes of machine tools. The failure of ball screw drives is associated with high downtimes and costs for manufacturing companies, which harm competitiveness. Data-based monitoring approaches derive the ball screw condition based on sensor data in cases where no knowledge is available to derive a physical model-based approach. An essential criterion for selecting the condition assessment method is the availability of fault data. In the literature, fault patterns are often artificially created in an experimental test bench scenario. This paper presents ball screw drive monitoring approaches for machine tool fleets based on machine learning. First, the potentials of automated machine learning for supervised anomaly detection are investigated. It is shown that the AutoML tool Auto-Sklearn achieves a higher monitoring quality compared to literature approaches. However, fault data are often not available. Therefore, unified outlier scores are applied in a semi-supervised anomaly detection mode. The unified outlier score approach outperforms threshold-based approaches commonly used in industry. The considered data set originates from a machine tool fleet used in series production in the automotive industry collected over 8 months. Within the observation period, multiple ball screw failures are observed so that sensor data about the transient phases between normal and fault conditions is available.
Need for condition monitoring of ball screw drives in machine tools
Machine tool feed drives are used for high-precision positioning of the milling tool and workpiece. Ball screw drives are suitable for this task due to their high-efficiency level [1,2]. Ball screws also exhibit low heating and length variation and high positioning accuracy [3]. Ball screws also have a low failure frequency. However, in case of failure, high downtime follows, reducing machine tools' technical availability. A total of 38% of the downtimes of feed axes are caused by ball screws and feed axes, accounting for nearly 40% of the leading causes of machine tool failure [3]. A ball screw drive consists of multiple components, including a raceway, ball screw, screw nut, drive motor, support bearings, and the table. The ball screw is subjected to preloading to increase rigidity [4]. Various types of ball screw damage exist. In the case of sudden early damage, running instability occurs due to damage sustained by the deflection elements resulting in defects of balls and the raceways. Gradual late damage occurs in ball screws used for longer than the intended operating time. In this case, pitting is created in the raceway and ball surfaces, leading to running irregularities. Another type of damage is the insidious loss of preload. Over time, the ball diameter decreases, reducing the preload and, thus, the stiffness properties of the drive. The stiffness variations increase the chatter tendency of the axis, and thereby surface tolerances of workpieces can no longer be maintained [5]. Additionally, ball screws exhibit higher wear than linear drives due to their higher friction component [2]. If the wear exceeds 80%, the ball screw is irreparable and must be replaced. If a ball screw is repaired in time, 30-50% of the replacement costs can be saved [6]. Due to the diversity of wear and fluctuating operating parameters (temperature, load, lubrication, etc.), predicting the operating time of ball screws is difficult [2]. Condition monitoring is used to reduce downtimes and high replacement costs of machine components and thus increase the availability of machine tools [7]. In addition, condition monitoring can assist in optimizing maintenance activities [4]. Condition monitoring approaches are divided into model-based and data-based approaches. Model-based approaches include physical models, and classical AI approaches like expert systems. Physical models comprise approaches based on parameter estimates, which use estimation methods and differential equations to determine the model parameters. Data-based approaches learn the system behavior automatically based on past data. This group includes machine learning methods such as artificial neural networks used as classifiers. In addition, machine learning methods are used to output an outlier score if fault data is unavailable (semi-supervised anomaly detection) [8]. In contrast to thresholdbased approaches (also called limit-value based) which allow fault detection, machine learning methods can be used for fault diagnosis. This requires that information about different types of faults is available [9].
Our contribution
This work presents a ball screw drive monitoring approach for machine tool fleets based on machine learning. An industrial data set of a machine tool fleet (monitoring data of 13 five-axis machine tools MAG SPECHT 600 collected over 8 months) used in series production in the automotive industry is considered. Within the monitoring period under consideration, the ball screw drives of the Z-axis are replaced on 4 machines. The distinctive feature of the data set is that information about the transition between normal and faulty conditions is apparent in three ball screw drives. In the literature, anomalies are often artificially generated in an experimental test bench scenario. There is usually no data available that (a) describes the entire life cycle of the ball screws in industrial practice and (b) describes the transition phase between normal and faulty conditions. These approaches also neglect the fact that the normal state of the machines changes over time. For this reason, an in-depth analysis of the monitoring signals in the normal and faulty condition of ball screws of 13 five-axis machine tools MAG SPECHT 600 is performed.
In the past, many researchers used machine learning classifiers for condition monitoring of ball screw drives [10][11][12][13][14][15][16]. This approach can be followed when fault data is available (supervised anomaly detection). These studies arbitrarily select the methods at the respective stages of data and feature preprocessing, dimensionality reduction, and classification. Often, it is not shown to what extent the model hyperparameters, e.g., how to configure the method, are optimized. In this context, automated machine learning (AutoML) offers the possibility to systematically support the practical user in selecting methods at the respective stages [17]. In addition, past studies have shown that AutoML tools like Auto-Sklearn achieve better classification results on average through ensemble building and meta-learning [18]. However, the potential for performance improvements of Auto-ML tools for ball screw condition monitoring has not been investigated to date. In this paper, a methodology for supervised anomaly detection using Auto-Sklearn is developed for ball screw drive monitoring in machine tool fleets. The proposed method is able to detect fault states of ball screw drives, and because of the generality of AutoML, it is not restricted to the machine types monitored in this paper.
Supervised anomaly detection methods are only applicable when sufficient fault data is available. For this reason, a semi-supervised anomaly detection approach is applied and evaluated. A so-called baseline model is created based on data describing the normal state of ball screws. The baseline model produces a unified outlier score to perform condition assessment. The monitoring quality of the unified outlier score approach outperforms threshold-based approaches commonly used in industry.
The paper is organized as follows: Chapter 2 presents the related work in machine learning based ball screw drive condition monitoring. The data set is described in Chapter 3. In Chapter 4, the monitoring methodologies are introduced. The results of the experimental study are presented in Chapter 5.
Related work on monitoring approaches of ball screw drives based on machine learning
Usually, machine axes are evaluated via a test cycle executed intermittently during the manufacturing process. To ensure robust monitoring, the influence of any sources of interference must be avoided. One source of interference is the manufacturing process. During the process, process parameters and the workpiece mass change within metal-cutting manufacturing processes. Consequently, the monitoring signals change regardless of the ball screw drive condition. For this reason, the monitoring signals are recorded during the process-free time in a predefined test cycle [2]. Anomalies are often artificially generated in recent studies to evaluate monitoring approaches. Jin et al. and Denkena et al. use different ball sizes to simulate the preload loss [10,11]. Emilia et al. induce defects on the running surface of the ball screw with the laser powder cladding method [12]. Feng and Pan use a double-nut system to vary the preload [13]. Benker et al. use two ball screws with different levels of preload [14]. Balaban et al. block the return channel with a detached piece of insulation. Additionally, the backlash is simulated using undersized balls and spalling defects on the ball screw are generated using electro-discharge machining [15]. Li et al. use different wear levels of ball screws acquired from an industrial partner [16]. An overview of the faults considered as well as the internal and external sensors used, is given by Butler et al. [4].
To detect anomalies, a distinction is made between two different procedures in condition monitoring: In the context of semi-supervised anomaly detection, it is assumed that only data describing the normal state is available [19]. For example, control charts based on T 2 and Q-statistics, as well as the Mahalanobis-distance, have already been used for ball screw monitoring [20,21]. In contrast, supervised anomaly detection uses fault classes in conjunction with a classifier that distinguishes between normal and fault states [19]. Table 1 gives an overview of supervised anomaly detection approaches of ball screw drives. Jin et al. apply various methods such as Gaussian Mixture Models, Self-Organizing-Maps, and the Mahalanobisdistance in a supervised mode for ball screw monitoring based on vibration and temperature data. The presented methods output a health index based on extracted features to evaluate the machine component's health. The authors show that the health indices correlate with the anomalies such as lack of lubrication and preload loss. Suitable features for classification are identified using the Fisher-score [10]. Benker et al. use Gaussian Process Classification to classify different preload levels [11,14]. Li et al. employ a support vector machine to classify the condition of ball screws. Sensor data from the machine control, such as torque, and data from three accelerometers are given. Relevant features are preselected in the first step using the Fisher-score. Furthermore, only a small subset of the preselected features is used for classification by sequential forward selection. The authors show that torque is more suitable for classifying the ball screw condition than vibration signals [16]. Feng and Pan develop a low-cost sensor system to collect temperature and vibration data for ball screw monitoring. Support Vector Machines are applied to classify different preload levels [13]. Emilia et al. present an approach for ball screw monitoring based on vibration and acoustic emission data. A Naive-Bayes classifier and a K-Nearest Neighbor classifier are employed to classify different states. The authors obtained improved results using vibration data compared to acoustic emission data [12]. Denkena et al. use the F-score and the principal component analysis (PCA) for feature selection and feature extraction. It is shown that the position error is more suitable for the classification of different preload levels than the acceleration signal data [11]. Schmidt et al. performed condition monitoring using a so-called ball-bar measurement. This method is used to determine the positioning accuracy of the machine tool. In total, data from 32 ball screws, including 145 measurements, are used. A K-Nearest Neighbor model is applied for classification. However, the data set is not described in detail [22]. Other authors use deep learning methods for ball screw monitoring, such as convolutional neural networks [23][24][25][26]. In the literature, there is often no comparison with "simpler" classifiers when deep learning methods are applied.
As described earlier, there is no systemic nature to the previously described work on supervised anomaly detection concerning method selection. It is rarely described why a specific method is selected for data and feature preprocessing and classification. Therefore, using an AutoML tool to create the model pipeline to predict ball screw conditions is a systematic and replicable approach. AutoML tools are increasingly being applied in the manufacturing context. For example, ML-Plan-RUL, presented by Tornede et al., allows for predicting machines' Feature selection (Fisher-Score) Self-organizing map, Gaussian mixture models, Mahalanobis distance [11] Hold out (train, test) None Normalization Decision Tree [12] Hold out (train, test) None None Naïve Bayes, K-Nearest-Neighbors [13] Hold out (train, test) 2 Kernels of SVC None Support Vector Machine [14] None Maximum likelihood None Gaussian Process Classification [15] Hold out (train, test) None Feature scaling (Standardization) Feed Forward Neural Network [16] Cross validation None Feature scaling (Standardization), feature selection (Fisher-Score, Sequential Forward Selection) Support Vector Machine remaining useful life (RUL) for predictive maintenance [27]. For predicting the shape error for pocket milling operations in process planning, Denkena et al. use Auto-Sklearn [28]. Auto-Sklearn is also used by Kißkalt et al. to predict tool wear during lot milling [29]. In contrast to literature approaches, data from a machine tool fleet are available in this work. Fleet-based condition monitoring assumes data from several identical machines or machine components are available. This increases the probability that failures of machine components occur in an observation period and thus that fault data are available. In addition, the question arises as to whether monitoring can be improved using data from other machines. Fleet-based monitoring approaches can be found in the literature focusing on specific machine components. For instance, Hendrickx et al. [30] develop a clustering-based condition monitoring approach for electrical drivetrain fleets. However, literature on ball screw drive monitoring in machine tool fleets that include long-term datasets is missing.
Data set description and analysis
The data set is collected from 13 five-axis machine tools of the type MAG Specht 600, recorded over more than 8 months. These machines are used in the automotive industry, where the Z-axis is heavily stressed. After the production of a lot, an identical test cycle of the Z-axis is performed. The machine's axis kinematics and the Z-axis's torque in normal condition from a test cycle are shown in Fig. 1. For each machine, the Z-axis torque M BSD is recorded at a sampling frequency of 100Hz via the machine control. In addition, the data from a 3-axis acceleration sensor Acc 1−3 from Marposs Monitoring Solution GmbH (Artis) is recorded, which is attached to the machine bed. Another acceleration sensor Spi is mounted on the tool spindle. The acceleration sensor Spi is originally installed for spindle monitoring. The acceleration sensors are connected to an industrial PC which stores the signal data for each test cycle. The industrial PC accesses the machine control data via Profibus. In addition, the test cycle data can be visualized via a control panel at the machine. The measuring setup is depicted in Fig. 2.
The sensor data is available as discrete time series Table 2 shows the number of test cycles with and without anomalies of the respective ball screws during the observation period. The numbering of the ball screws corresponds to the respective machine tool in which the ball screw is installed. A total of 1540 test cycles are performed for 13 identical machines. For a total of 4 machines, a ball screw drive is replaced during the observation period. The ball screw drives are replaced due to tolerance deviations concerning the manufactured products. The ball screws used before disassembly are marked "pre" in the table.
The first step involves analyzing the fault patterns of the monitoring signals in the time and frequency domains that occur before ball screw disassembly. In the case of three ball screws, test cycles are available that describe the transition between normal and faulty conditions (Bs7-pre, Bs11-pre, Bs13-pre). In the case of ball screw Bs12-pre, it is noted that an advanced state of degradation is already present at the beginning of data acquisition. For ball screws Bs-11-pre and Bs-12-pre, damage to the raceways is detected after disassembly. In contrast, wornout balls have been the root cause of failure in the case of ball screw Bs13-pre. No condition changes are detected for the newly replaced ball screws (Bs7-post, Bs11-post, Bs12-post, Bs13-post). Figure 3 illustrates the segmented torque of the Z-axis M BSD and the accelerometer signals ( Acc 1−3 , Spi ) for different degradation levels. The time series are segmented in such a way that only the segments in the forward direction with constant feed are considered. These fixed segments are selected based on expert knowledge. In the case of ball screw Bs7-pre, no significant changes in the torque signal M BSD are observed after the anomaly starts. In contrast to the observations of Lia et al. [16], this means that the internal control sensor signals are not sufficient for robust monitoring of ball screw drives in machine tools. In the case of ball screws Bs11-pre, Bs12-pre, and Bs13-pre, higher frequencies occur in thee torque signal M BSD at the start of the anomaly. For each faulty ball screw, changes in the accelerometer signals are visible when the abnormality occurs. In the case of ball screw Bs14-pre, signal Acc 2 is shown since no significant changes are visible in signal Acc 1 . Therefore, it is concluded that the acceleration signals of the triaxial accelerometer should be evaluated in each direction. In the case of ball screw Bs11-pre, more significant peaks initially appear in the acceleration signal Acc 1 at the beginning of the abnormality. This is also observed in the signal of the acceleration sensor Spi of the spindle. As wear progresses, new signal plateaus are formed in all cases after several test cycles. These signal plateaus initially form for specific value ranges and increase in size over time. Figure 4 depicts the frequency spectra of different ball screw conditions of the torque signal. For this purpose, the signals are transformed using a fast Fourier transform (FFT). In the case of the ball screws Bs11-pre and Bs13-pre, it can be seen that peaks occur in similar areas at the beginning of the anomaly. It is noted that in addition to the amplitude, the signal's frequency also changes as wear progresses. The frequency changes may be due to the fact that the damage to the ball raceways gets wider and thus the excitations change. Changes in the frequency range of the accelerometer is only observed in the case of the ball screw Bs11-pre.
However, the monitoring signals vary in the normal state. Recent studies have shown that monitoring signals change due to factors such as temperature, axis position, and ball screw exchanges regardless of the ball screw conditions [31]. Other reasons could be different lubrication and preload states. In addition, a slight tilting of the machine axes and adapted controller settings could also cause different signal trajectories. Figure 5 illustrates the distributions of the segmented time series of the torque as well as the acceleration sensors in the normal state. The acceleration sensor Acc 1−3 takes the value 0 for some test cycles in the case of ball screw Bs2, which indicates incorrect data acquisition. It is observed that the value range of the acceleration signal Spi is significantly larger than the signals of the triaxial acceleration sensor Acc 1−3 . For those machines without a ball screw disassembly, the sensor values' ranges are very similar. However, the distributions of the torque take different shapes in distribution. It is observed that the value range of newly assembled ball screws, like torque M BSD and the acceleration signals ( Acc 1−3 , Spi ) is significantly larger. This is due to the running-in processes of newly installed ball screws. Figure 6 presents the trajectories of the first 5 segmented torque signals after assembly. In the case of ball screw Bs7-post, Bs12-post, and Bs13-post, there are apparent differences in signal level and progression.
In addition to the signal changes in the running-in process, other signal patterns occur independently of condition changes of the ball screws. In the case of ball screw Bs6, higher frequencies occur in the torque of the forward motion without any replacement being documented. For ball screws Bs5, Bs9, and Bs10, higher frequencies are visible in the torque in the backward movement of the test cycle. In the case of torque, random level changes occur between test cycles in the normal condition. In addition, a gradual level shift is visible for the entire observation period for the torque and acceleration signals. In the case of acceleration signals, random peaks occur at irregular intervals in the normal state. As a result, robust monitoring strategies are needed to prevent false alarms.
Supervised anomaly detection approach using automated machine learning
In the first step, machine learning is used for supervised anomaly detection of ball screw drives assuming that fault data is available. AutoML methods are used for decision support for model selection. In short, AutoML refers to methods for the optimization, automation, and analysis of design decisions regarding the complete machine learning (ML) pipeline to obtain a model with peak performance. The ML pipeline comprises data preprocessing, feature selection, model selection, and the optimization of their hyperparameters, as well as postprocessing of the results. The challenge involves determining a suitable solution within a computational budget in this large search space. Numerous approaches have been developed in the past to solve this problem [32][33][34][35][36][37][38][39][40][41]. These approaches allow domain experts without ML expertise to easily use ML methods in practice [18,32]. Thornton et al. introduce Auto-WEKA to select models and optimize their hyperparameters for classification problems simultaneously. They treat the choice of the model as another hyperparameter and use sequential modelbased algorithm configuration (SMAC) [42,43] as their solver. SMAC is an iterative, global optimizer based on Bayesian optimization. In Bayesian Optimization, the true objective function which should be optimized is approximated by a surrogate model. This makes it very sample-efficient and requires only few function interactions which is especially useful if the function evaluation is costly or time-consuming [44]. Extensions of Auto-WEKA allow the selection of a model and its hyperparameters for regression and clustering tasks. The developed approach also enables the evaluation of features using filtering methods. The authors show that Auto-Weka can achieve better results than grid search or random search for model and hyperparameter selection [32]. A more recent approach inspired by Auto-WEKA is Auto-Sklearn, which can be used for regression and classification problems. Auto-Sklearn also uses SMAC as the optimizer. It further allows data preprocessing, e.g., the imputation of incomplete data, feature scaling, and Bs2 Bs10 Bs11-pre Bs12-pre ball screw number dimension reduction (e.g., PCA). In contrast to Auto-WEKA, Auto-Sklearn has two additional components. Meta-learning is utilized for finding good instantiations of Auto-Sklearn based on already-seen data sets. For this purpose, in an offline phase, data sets of the OpenML [45] database are described using meta-features. In the next step, optimal configurations for these data sets are determined by SMAC. A new data set is assigned to a group of similar data sets in the OpenML database using the meta-features. This enables quick access to precomputed optimal configurations stored in the database saving computational costs on the user's side. The second innovation enables the construction of ensembles with good prediction quality, allowing for more robust predictions. The authors showed that the prediction quality of Auto-Sklearn can outperform the results of other Auto-ML approaches for several data sets of the OpenML repository [18]. The recently released version Auto-Sklearn 2.0 provides a new meta-learning technique for improved handling of iterative algorithms.
Besides Auto-Sklearn and Auto-WEKA, other AutoML approaches such as hyperopt-sklearn, TPOT, TuPAQ, ATM, Automatic Frankensteining, ML-Plan, Autostacker, AlphaD3M, Collaborative Filtering, and Auto-Keras have also been published [17,46]. An overview of different AutoML approaches and their features is given in Waring et al. [46]. Besides approaches from academia, there are numerous commercial approaches to AutoML, such as Rapidminer, Microsoft Azure Machine Learning, Google's Prediction API, Amazon Machine Learning, etc. [35]. In this study, Auto-Sklearn is used for supervised anomaly detection of ball screw drives in machine tool fleets.
The overall workflow with Auto-Sklearn is depicted in Fig. 7. Segments of the time series are usually selected to increase the monitoring quality. In addition to the time series of the test cycles, the labels for each time series are also available (see Table 2). In this work, a distinction is made between normal and faulty conditions (fault detection). For condition monitoring after data acquisition is extracting features from the time series because the supervised learning methods in Auto-Sklearn require a fixed length input. However, it is not possible to determine in advance which signal features are best suited for the respective monitoring application. For this reason, a high quantity of features needs to be generated from the data to obtain a few useful features [47]. This highlights the need for an automatic selection of the data and feature processing. More than 700 time series features are generated for each sensor using the tsfresh [48] python library to determine the condition of the ball screw drive. The default hyperparameters of the signal feature generation methods contained in tsfresh are applied. It should be noted that feature engineering and selection is essential for the monitoring quality. The generated features serve as input for Auto-Sklearn to determine the condition of the ball screws. Each pipeline constructed by Auto-Sklearn consists of up to three data preprocessors, one feature preprocessor and one classifier plus their respective hyperparameters. The search space for the ML pipeline is hierarchically organized as a tree and contains continuous, categorical and conditional hyperparameters. Auto-Sklearn can select from 16 classifiers, 19 feature preprocessing methods, and numerous data preprocessing methods for the classification task. In total, there are more than 150 hyperparameters [17]. The data preprocessing can include feature scaling, imputation of missing values, one-hot encoding, and/or balancing of target classes. Examples of feature preprocessing are PCA and ICA. Available classifiers are Adaboost, Naive Bayes, Decision Tree, Extra Trees, Gaussian Naive Bayes, Gradient Boosting, K-Nearest Neighbor, Linear Discriminant Analysis, Linear Support Vector Machine (SVM), Non-Linear SVM, Multi-layer Perceptron, Multinomial Naive Bayes, Passive Aggressive, Quadratic Discriminant Analysis, Random Forest, and Stochastic Gradient Descent. In addition, Auto-Sklearn builds ensembles for robust predictions. The idea behind ensemble building is based on the fact that classifiers have different advantages and disadvantages on different data sets that complement each other.
In contrast to many literature approaches, data from several machine tools are available in this work. Figure 5 illustrates that the data distribution in the normal state of ball screws differs from machine to machine. In addition, signal characteristics change over time without any defect of the ball screws being present. This raises the question of the generalizability or applicability of the ML-pipeline to new ball screws and the robustness against false alarms. For this reason, Chapter 5 evaluates different strategies for applying the presented approach to new and unseen data.
Computation of unified outlier scores using machine learning
In supervised anomaly detection, a labeled data set containing fault data is assumed to be available. If only insufficient fault data is available to train a classifier, semi-supervised anomaly detection approaches can be considered. A so-called baseline model is trained based on data describing the normal state. An outlier score is produced which varies in case of condition changes. In this work, the approach of Denkena et al. [49] is used and adapted for anomaly detection of ball screw drives in machine tool fleets. Thereby, methods for unsupervised anomaly detection are used for semi-supervised anomaly detection. According to Kriegel et al. [50], the approach for calculating uniform outlier scores is employed. Using the uniform outlier scores, the scores of several outlier score methods can be combined into an ensemble. Moreover, scores from multiple sensors can be aggregated for robust monitoring. In contrast to the work of Denkena et al. [49], data from multiple machine tools are considered. In addition, different scaling strategies are applied.
In the first step, feature groups are extracted based on the segmented signals. In contrast to the supervised approach, only simple signal features are considered. This is due to the fact that no fault data is available for model training. Table 3 provides an overview of the feature groups used.
The first group consists of the general-purpose features in the time domain, which are adopted from the study of Denkena et al. [49]. Another feature group uses information on the sample autocovariance. The autocovariance indicates how similar a time series x i−l shifted by l discrete time steps is to the original time series x i . According to Eq. (1), the sample autocovariance is calculated as follows [38,51]: The sample autocovariance is calculated for l ∈ {0, … , 9} . Features are also extracted from the frequency domain by transforming the raw data of all signals using an FFT. The amplitude and frequency of the five most dominant peaks between 10 and 50 Hz are used as another set of features. The sciPy library is used to calculate the features from the time domain [52]. Additionally, the statsmodels library is applied to compute the sample autocovariance [51].
In the next step, an outlier score calculation method is selected. Various methods exist for unsupervised anomaly detection that makes different assumptions about the data and the occurrence of anomalies. In this work, the K-Nearest Neighbor (KNN) method is used to evaluate the ball screw condition based on the extracted features of the test cycles. This method is characterized by a small number of hyperparameters and makes no assumptions about the data distribution or signal features. Using the KNN method, an anomaly score S(o) is calculated for a new observation o ∈ O . Thereby, according to Eq. (2), the distance of a new observation o ∈ O to its nearest neighbor i ∈ N k (o) is used as an anomaly score [53]: For this purpose, a distance metric d needs to be selected. An observation o is the standardized feature vector extracted from the time series of test cycles c {1, … , n b } . Additionally, the outlier score is scaled using the approach of Kriegel et al. [50]. The scaling of the outlier score allows the calculation of decision boundaries and the construction of robust ensembles. According to Kriegel et al. [50], the outlier score is scaled to be regular and normal. An anomaly score S is regular if The linear scaling assumes an equal distribution of the regularized outlier scores. It should be noted that the optimal choice of the correct distribution depends, for example, on the method chosen to calculate the outlier scores. In this work, the Gaussian scaling as well as the Gamma scaling are applied. The Gaussian scaling contains only two adjustable parameters (mean and standard deviation). According to Eq. (6), the Gaussian scaling is calculated: Before normalization, the mean
Reg train S and standard deviation
Reg train S of the regularized outlier scores of the training set are determined. The Gaussian error function (erf ) is also employed. Kriegel et al. [50] note that low-dimensional KNN-scores are more likely to reflect a gamma distribution. To perform gamma scaling, the cumulative density function is calculated according to Eq. . After calculating the normalized anomaly scores, an aggregated score considering OD j ∈ OD outlier scores of the ensemble is calculated according to Eq. (9): In this work, the scores of the accelerometer signals Acc 1−3 are aggregated into an ensemble to minimize the number of false alarms. To decide whether a new observation o ∈ O test represents an anomaly, Eq. (10) is evaluated: An alarm is issued in case of S final (o) = 1 . The risk factor allows us to adjust the sensitivity of the monitoring system.
Signal threshold-based approaches
In addition to machine learning, signal threshold-based approaches have been used for monitoring in the literature [9]. For example, fixed limits and tolerance bands are designed for process monitoring in machining to detect various anomalies such as collisions, overload situations of jammed tools, or tool breakage [54]. Two signal thresholdbased approaches for semi-supervised anomaly detection are evaluated in this work. The first approach proceeds in such a way that certain signal features sf c fixed limits are calculated based on safety factor : The safety factor typically takes values of 1.1 or 1.2. An alarm is triggered if a signal feature sf c for c ∈ C test is greater than the limit value GP_up.
In another approach, tolerance bands, according to Brinkhaus [54], are used for monitoring. In the first step, upper and lower envelopes [h_up c (i), h_lo c (i)] around x c (i) are formed according to Eqs. (12) and (13): It is assumed that the upper and lower envelopes follow a normal distribution. The parameter represents the shift factor of the time series. Based on the determined envelopes, an upper and a lower limit value are determined according to the Eqs. (14) and (15): The mean values − h_up (i) and − h_lo (i) and the standard deviations s h_up(i) and s[h_lo(i)] of the envelopes are used to calculate the tolerance bands. The safety factor is adjusted to set the distance between the decision boundaries and the mean values of the envelopes. In the work of Brinkhaus [54], time series are weighted differently depending on their occurrence. Thus, the mean values and standard deviations of the envelopes are calculated based on the Eqs. (16) and (17) as a function of the memory parameter a: For larger values for the memory parameter a , the weight of past time series for calculating the mean and standard deviation of the envelopes is reduced and vice versa.
Supervised anomaly detection
In the first step, the supervised anomaly detection approach presented in Chapter 4.1 is applied for fleet-based condition monitoring of ball screw drives in machine tools. In an experimental study, the prediction quality of different machine learning methods used in the literature (see Table 1) is compared to Auto-Sklearn. Auto-Sklearn 2.0 [17] (version 0.12.6) is used in the experiments. The data from all machines are combined into one set, and the time series are randomly shuffled. After feature generation using tsfresh, Auto-Sklearn is applied to perform fault detection. During optimization, fivefold cross-validation is performed in the inner training loop. Auto-Sklearn is compared to baseline methods used in literature with default hyperparameters. All baseline methods use the standard scaler as feature preprocessing (removing the mean and scaling to unit variance). Baseline methods are SVM, Decision Tree (DT), Gaussian Process Classifier (GP), K-Nearest Neighbor (KNN), Multilayer Perceptron (MLP), and Gaussian Naïve Bayes (GNB). In addition, methods for dimension reduction (feature extraction and feature selection) of the literature approaches are adopted. In this setting, Auto-Sklearn selects one single classifier for predictions. All experiments are performed on Intel Core i9-9900KF CPUs with 3.6 GHz and 32 GB RAM. A time budget of 1500 s is defined for Auto-Sklearn (fivefold inner cross-validation).
The predictions of binary classifiers can be evaluated using various metrics. Table 4 shows a confusion matrix for predictions of binary classifiers. The so-called false positives represent the number of false alarms issued by the monitoring system. The false negatives represent the number of anomalies not detected by the monitoring system. Combined with the number of test cycles correctly detected as anomalies, the values for Precision and Recall are calculated according to Eqs. (18) and (19). Based on these values, the f1-metric is calculated according to Eq. (20). The proposed monitoring approach and the baselines are evaluated on an outer fivefold cross-validation for 5 different random seeds using f1-metric. The f1-metric is applied because the data set is unbalanced by the fewer number of faulty test cycles.
Thereby, a perfect classifier achieves an f1-score of 1. It should be noted that this evaluation procedure is used for model comparison. In practice, Auto-Sklearn only needs to be run once using inner cross-validation on the whole data set. Table 5 shows the results for the case of non-segmented time series. The highest classification accuracy of the baseline approaches is achieved by the MLP classifier (f1-score of 0.9059). For GP, the f1-score with baseline settings is 0.0000. This is due to the fact that GP finds no true positives. Auto-Sklearn achieves the highest f1-score of 0.9509. A further step involves segmenting the time series. Thereby, only the segment of the time series in which the ball screw moves in the forward direction is considered, i.e., t [SB, SE] . Thereby, SB and SE represent the start and the end of the segmentation window, respectively. It is observed that across all baselines, the classification accuracy is lower compared to the non-segmented case. The best baseline approach MLP realizes an f1-score of 0.8924. Auto-Sklearn achieves the best result (f1-score of 0.9576). Overall, the standard deviations are lower compared to the non-segmented case. In summary, Auto-Sklearn performs well in a short amount of time whereas the baselines from the literature provide poor results. Auto-Sklearn also achieves robust monitoring results in both the segmented and non-segmented case.
Furthermore, it is evaluated how often a certain classifier and feature preprocessing method is considered by Auto-Sklearn. Figure 8 illustrates that RF is most commonly selected by Auto-Sklearn in case of non-segmented test cycles. It is noticeable that no preprocessing is applied most frequently.
The final f1-score depends significantly on the preset time budget of Auto-Sklearn. Figure 9 illustrates the incumbent changes of Auto-Sklearn and the best baseline approach over time. Thereby, incumbent denotes the currently best hyperparameter configuration. Auto-Sklearn outperforms the best baseline approach after a few seconds.
Furthermore, the evaluation mode is adapted in a further step. In the previous evaluation mode, the time series of all machine tools are combined into one data set and randomly shuffled. As shown in Chapter 3, the distributions and value ranges of the sensor data, especially the torque, vary between the respective machine tools. Therefore, the question arises how robust the monitoring system is for new and unseen ball screws. In the adapted evaluation mode, the data is iteratively partitioned so that Auto-Sklearn is applied to the ball screws of each machine tool separately (outer ball screw cross-validation mode). In each iteration, data from one ball screw is included in the test set and data from the remaining ball screws are included in the training set. To optimize Auto-Sklearn, a fivefold inner cross-validation is performed using training data. The ensemble size is set to 1 for Auto-Sklearn. The results for an ensemble size of 10 is shown in the appendix. For the ball screws that contain anomalies (Bs7-pre, Bs11-pre, Bs12-pre, Bs13-pre), the f1-metric is used to evaluate the monitoring quality. For the remaining ball screws that do not contain faulty time series, the false alarm rate FAR according to Eq. (21) is used for evaluation: The false alarm rate FAR is calculated by dividing the normal condition time series that are falsely declared as faulty time series by all normal condition time series to be tested. The results of the evaluation are shown in Table 6.
The evaluation is performed considering segmented and non-segmented time series and different sensor groups. It should be noted that ball screw Bs12-pre is in a faulty state when the data acquisition started. It is observed that the number of detected faulty time series is significantly lower compared to the previous evaluation mode. This is due to misclassified normal cycles number of normal cycles . the fact that the sensor value trajectories and distributions differ for each ball screw. In addition, the adaptive evaluation mode provides significantly fewer fault data to learn anomaly patterns in cases where faulty ball screw drives are tested. Condition changes are detected only in advanced faulty states in the case of ball screws bs7-pre, bs11-pre, and bs13-pre. As a result, the number of available fault data is not sufficient to detect incipient anomalies in the transition phase. Condition changes of ball screw bs7-pre are only detected using the acceleration sensors Acc 1−3 . Considering the acceleration signals Acc 1−3 , a larger number of faulty test cycles are detected compared to using the torque signal M BSD . Therefore, it is concluded that the torque signal is not sufficient for robust detection of faulty conditions. When utilizing all available sensor signals ( M BSD , Acc 1−3 , Spi ), condition changes of ball screws bs11-pre, bs12-pre, and bs13-pre are detected in the segmented and non-segmented case. Due to the lower amount of detected fault cycles, the acceleration and torque signals should be evaluated separately. However, the false alarm rate is the lowest across all ball screws considering all available sensors.
Semi-supervised anomaly detection
The first step evaluates the suitability of signal threshold-based approaches for semi-supervised anomaly detection of ball screw drives in machine tool fleets. These approaches are applied when limited or no information about faults is available. The results for the segmented sensor signals are presented since the monitoring quality is superior compared to the non-segmented case. The signal threshold-based approaches are applied first. According to Eq. (11), fixed limits are determined for various signal features based on the test cycles that describe the normal condition. However, a variety of challenges exist in the application of fixed limits. This approach is suitable for simple anomalies where complicated interactions between signal features do not need to be evaluated. Figure 10 illustrates the fixed limits ( = 1.1, 1.2 ) for the peak-to-peak value of the segmented torque signal M BSD computed based on the first ten normal running test cycles. Condition changes are reliably detected in the case of ball screw bs11-pre. It is observed that in the case of ball screws bs7-pre, bs12-pre, and bs13-pre, the feature changes with the replacement of the ball screw rather than with the occurrence of the anomaly. Similarly, in case ball screw bs13-pre, the feature changes at the beginning of data recording, so anomalies are not detected. In addition, false alarms are issued for the peak-to-peak feature in case of ball screws bs3, bs8, and bs9 without any anomalies occurring. In summary, fault patterns vary, and thus, the present monitoring problem cannot be solved considering single features without evaluating interactions of features. Some sensor features vary independently of the ball screw condition which increases the risk of false alarms. This is also true for the triaxial accelerometer signals Acc 1−3 and the spindle accelerometer Spi.
In addition, the monitoring quality of the tolerance bands presented in Chapter 4.2.2 is evaluated. In Fig. 11, tolerance bands ( = 6 , = 0.4 ) using the segmented torque M BSD and the acceleration signal Acc 1 of ball screw bs11-pre are shown. Signal changes in the case of torque and acceleration signals are not detected as the anomalies occur. The number of false alarms increases significantly when the safety factor is reduced. It should be noted that tolerance bands only evaluate the time domain of the signals. In summary, it can be stated that the presented threshold-based approaches are not suitable for robust ball screw monitoring in machine tools. The next step uses uniform outlier scores for ball screw drive monitoring. For this purpose, a so-called baseline model is trained based on data describing the normal condition of ball screws. The outlier score is used as a health indicator to evaluate the ball screw condition. Since no fault data is available, the outlier score is calculated based on certain feature groups described in Chapter 4.2. In addition, the evaluation of the monitoring quality for the torque M BSD and acceleration signals Acc 1−3 is performed separately. This is due to the fact that condition changes are not always visible in the torque signals (e.g., for ball screw Bs7-pre). Consequently, the number of detected anomalies is reduced by combining the outlier scores of the torque and the acceleration signals.
The KNN-score is utilized to produce outlier scores. The number of k-nearest neighbors ( k = 5) and the distance metric (Minkowski metric) are chosen. The risk factor is set to 10 −5 . The PyOD python library [55] is applied to calculate the raw values of the outlier scores. Gaussian scaling is first implemented to normalize the outlier scores. The outlier scores of the triaxial accelerometers Acc 1−3 are aggregated into an ensemble using Eq. (9). This step is necessary because these signals vary significantly compared to the torque signal in the normal state, increasing the risk of false alarms.
Overall, two approaches are evaluated to split the dataset and apply the baseline model. The first approach performs a ball screw cross-validation. In each iteration, one ball screw is used as the test data set. The remaining ball screws without anomalies represent the training data set. Table 7 depicts the results of the evaluation. It is observed that in the case of torque M BSD , faulty states are detected for ball screw Bs11pre by using all feature groups. However, the number of detected faulty test cycles depends on the feature group used. The highest f1-score of 85.42 is obtained using the peaks of the frequency spectrum. No faulty test cycles are detected for the Bs7-pre and Bs13-pre ball screws. This is due to the fact that no changes in the torque signal occur in the case of the Bs7-pre ball screw. In the case of the Acc 1−3 accelerometers, no faulty test cycles are detected overall. The result indicates that this application method is unsuitable for robust monitoring regarding the low number of detected anomalies.
The evaluation mode is changed in the second step. In each iteration, only the data of the particular ball screw to be tested is considered (separate training mode). The initial training database represents the first 10 test cycles of the tested ball screw. For all remaining test cycles without anomalies of the same ball screw, it is iteratively checked whether false alarms are issued. After each iteration, the tested test cycle is added to the training database. For those ball screws without anomalies, the false alarm rate FAR is calculated. For all other ball screws containing faulty test cycles, the f1-score is applied to determine the monitoring quality.
The evaluation results are presented in Table 8. For the torque M BSD signals, an f1-score of 98.18 is obtained using the peaks of the frequency spectrum for ball screw Bs11-pre. In addition, faulty test cycles are also detected for ball screw Bs13-pre (f1-score: 72.41). It is recognized that the number of false alarms increased significantly compared to the first evaluation mode. This is caused by a lower number of training samples. False alarms are generated in the case of 5 ball Subsequently, gamma scaling is applied to normalize the regularized outlier scores. The corresponding results are illustrated in Table 9. In comparison with Gaussian scaling, the number of false alarms is reduced. Robust monitoring results are obtained for the torque signals M BSD considering the peaks of the frequency spectrum. Condition changes are detected for ball screws Bs11pre and Bs13-pre. At the same time, no false alarms are produced. No false alarms are generated in the case of ball screw bs6 despite signal changes in the frequency spectrum of the torque signal. This is due to the fact that these signal changes occurred in the first test cycles which are part of the initial training database. For the acceleration signals Acc 1−3 , the features of the autocovariance are suitable for monitoring. However, the number of detected faulty test cycles is lower than the torque for the Bs11-pre (f1-score: 58.97) and Bs13-pre (f1-score: 36.36) ball screws. Comparing the results between Table 7 and Table 9, it is evident that the monitoring quality is significantly increased by the separated training of the baseline model. Apart from ball screw bs11-pre, condition changes of ball screw bs13-pre are also detected in the separate training mode.
In summary, the separate training of the baseline model is necessary because the distribution of the sensor data for each machine tool shows significant differences. In addition to Gaussian and gamma scaling, linear scaling is also applied to normalize the outlier scores. However, the number of false alarms generated is significantly higher than Gaussian and gamma scaling.
Conclusion
This paper presents machine learning approaches for ball screw drive monitoring in machine tool fleets. The data set originates from test cycles of thirteen identical 5-axis machine tools used in series production. The results are as follows: 1. Challenges in ball screw drive monitoring consist of the limited amount of fault data and changes in the monitoring signals in the normal state. 2. The data analysis reveals that the internal control data (torque) evaluation is insufficient for detecting condition changes in all ball screw drives. 3. Supervised machine learning methods are suitable for data-based ball screw anomaly detection in case the condition labels are given. In this context, a monitoring approach based on automated machine learning is developed to detect condition changes. Several strategies are examined to split the data to achieve the highest possible generalizability and robustness. The proposed approach achieved better classification results compared to literature approaches. Taking into account external sensors (acceleration data), condition changes are correctly detected for all ball screw drives. However, the available data are not sufficient to learn the transition phase between normal and faulty states. 4. In addition, a semi-supervised anomaly detection approach based on uniform outlier scores is applied. A baseline model is used to learn the normal state of the ball screw drives. Condition changes are detected using an outlier score of the baseline model. By using unified outlier scores it is possible to build robust ensembles of acceleration signals to prevent false alarms. Robust results are obtained applying the k-nearest neighbor outlier score and gamma scaling. It is found that a baseline model should be trained specifically for each ball screw separately. In addition, the sensor signals should be evaluated separately in the semi-supervised anomaly detection mode. The presented approach achieves a better monitoring quality than signal threshold-based approaches such as tolerance bands and fixed limits. | 11,573.8 | 2023-05-23T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Renewing a Prophetic Mysticism for Teaching Children Justly: A Lasallian Provocation
: There is an urgent need to renew religious charisms historically founded by teaching Religious Orders to invigorate and sustain God’s mission through Catholic education. It is within this need that I consider how the Lasallian tradition may be critically mined to develop a prophetic mysticism that integrates contemplation with the public activity of teaching children justly as prophetic witness in contemporary Catholic education. This article makes two contributions. First, it method-ologically brings the Lasallian tradition into dialogue with the contemporary turn to children and childhood in theological research. I suggest that this turn presses us to re-commit to a preferential option for children in Christian mission, which serves as an interpretive lens to retrieve and develop a Lasallian prophetic mysticism. This lens allows us to see more clearly how God calls forth the Christian vocation of teaching through children as vulnerable agents who share and participate in life with us. Second, building on this prophetic mysticism, I propose a praxis of socially engaged contemplation that attunes Catholic educators to become ethically present to the social marginalization of children. Cultivating this ethical presence is necessary for teaching children justly—a moral imperative that has become all the more crucial today in light of reports on the sexual abuse of children in the Catholic church.
Introduction
The renewal of charisms historically founded by teaching Religious Orders remains an urgent task in Catholic education, in light of the declining number of religious sisters, brothers and priests in schools (Grace and O'Keefe 2007;Earl 2007;Lydon 2009). This demographic change presents a key challenge worldwide as to whether and how religious charisms could be transmitted to and maintained by lay school leaders and teachers who are now the majority in Catholic educational institutions (Lydon 2009). As pointed out in the CCE's 1982 document Lay Catholics in Schools: Witnesses to Faith, "it is the lay teachers, and indeed all persons, believers or not, who will substantially determine whether or not a [Catholic] school realizes its aims and accomplishes its objectives" (Congregation for Catholic Education 1982, para. 1). A major task here is the extent to which lay educatorssome of whom may not be Catholic or have a religious affiliation-could be formed to embody the charisms. What is at stake is the distinctive influence these charisms have on the Catholic identity and mission of religious sponsored schools. Religious educator Thomas Groome goes as far to say that "if the foundation charisms [of religious institutions] cannot be broken open among teaching colleagues, there will be no alternative but to call it [the Catholic educational project] off" (cited in Lydon 2009, p. 51).
In this article, I prefer to use the term 'renewal' to 'transmission' or 'maintenance' (cf. Lydon 2009) because it captures more closely the living dynamism of charism as the creative gift of God's Spirit in life. As theologian Bernard Lee (2004) argues, "Charism is not property. It is not transferrable. It is not transmittable. And it is not controllable". Instead, it is "reinvented, posited, in a new socio-historical setting, but never simply reenacted" (p. 16).
Charism is "reinvented . . . when a community's deep story speaks effective, felt words to the transformation of some of the world's most pressing needs and aspirations" (ibid., p. 5). The content of charism is relationality, impelled by God's Spirit as transformative grace that calls us to lived discipleship, responsive to mission in this world "where God's intentions for history are to be lived" (ibid., p. 4). Thus, charism cannot simply be recovered as some pre-packaged good handed on unilaterally by members of religious congregation to the laity. Its renewal is an interpretive task that engages in a "process of retrieval, critique, and reconstruction" (McCarthy 2000, p. 202). It involves "an imaginative use of tradition" (Sheldrake 1991, p. 168). That is, how might the wisdom found in religious charisms be broken open, re-imagined, and embodied in fresh ways that invigorate God's mission in today's world through Catholic education? I address this question with a focus on the Lasallian charism, bringing it into dialogue with the current theological interest in children and childhood (for example Berryman 2009Berryman , 2017Bunge 2001Bunge , 2006aDillen and Pollefeyt 2010;Jensen 2005;Miller-McLemore 2003Prevette et al. 2014;Strhan et al. 2017;Wall 2010aWall , 2010bWall , 2017. This set of literature presses us to commit in Christian mission to a preferential option for children, which serves as an interpretive lens to retrieve and develop a Lasallian prophetic mysticism that discerns God calling forth the Christian vocation of teaching through children as vulnerable agents who share and participate in life with us. To be clear, my analysis attends more to the 'what' of charism than the 'how' of its transmission, which, as Lydon (2009) points out, "the literature is replete with" (p. 51). Yet, insufficient attention has been given to distill the deeper inner dynamics of charism at work, beyond the typical articulation of it as a seemingly static set of characteristics, precepts or even values that define a religious congregation's and/or school's identity.
In this regard, a prophetic mysticism for teaching children justly is drawn out and developed as that deeper dynamic operative in the Lasallian charism. Such is a mysticism that awakens teachers to the abiding presence of God's Spirit who is ever enlivening and energising the everyday act of educating as "touch[ing] the hearts" of students (Meditations 139.3). 1 The mystical is also dialectically related to the prophetic; both mutually constitute each other to shape Christian public witness that enfleshes the passion and compassion of God's loving solidarity with the poor and marginalised (Bingemer 2020;Egan 2001;Sheldrake 2005). Lasallian mysticism of faith is thus constitutive of the call to teach children justly as an act of prophetic witness with zeal. Lasallian prophetic mysticism integrates contemplation with the public activity of educating children justly. In doing so, it cultivates a praxis of socially engaged contemplation that attunes Catholic educators to become ethically present to the social marginalization of children.
Why a Prophetic Mysticism to Teach Children Justly?
This renewal of a Lasallian prophetic mysticism is significant in light of the turn to children and childhood in contemporary Catholic and Protestant theology. This turn is a response to how "issues related to children have tended to be marginal in almost every area of contemporary theology" (Bunge 2001, p. 3). While the care for children and their instruction in faith are enduring themes in the Christian tradition, there is a lack of systematic and critical attention given to how understandings of children and childhood are theologically constructed. Sustaining this theological turn is a continued concern for our ethical obligations to children not only in families but also at the levels of the church and the state (Bunge 2001). This concern is critical in light of the sexual abuse of children by religious authorities. It is also fueled by the promulgation of the United Nations Convention on the Rights of the Child (UNCRC), which pushes churches to critically reflect on their theological foundations so as to better serve child advocacy while attending to the moral and spiritual formation of children (for example McAleese 2019; Wall 2010bWall , 2017Werpehowski 2012). This theological turn to children and childhood, I argue, presses us to commit to a preferential option for children in Christian mission. It is within this option for children that I anchor the renewal of a Lasallian prophetic mysticism for Catholic education.
Preferential Option for Children in Christian Mission
To be clear, an option for children does not replace a preferential option for the poor, but builds on it by recognising the systemic marginalisation of children as a socially diverse group. Children as defined by the United Nations refer to persons under the age of eighteen. This article uses this broad definition but acknowledges that more than just age, 'child' and 'children' carry "different meanings across diverse cultures and societies" (Wall 2017, p. 5). 'Childhood' is relationally shaped and experienced contextually by children with others in society, at the complex intersection of biological, developmental, socio-economic and political factors (Pang 2021). Yet, children globally are still the least among the socially vulnerable in their situated relationships of dependence on and interdependency with adults in society. Some children also find themselves already discriminated and marginalized at the complex intersection of multiple identity markers such as class, race, nationality, able-ness, religion, gender and sexuality. A preferential option for children calls us to confront how it is that some children do not survive well into adulthood. Whose children are these? What is going on, and what is God calling us to do?
Two themes in the theological research on children and childhood call us to reclaim a preferential option for children in Christian mission: (i) an ethic of justice for children; and (ii) the relational agency of children as integral to their dignity as human persons.
(i) Ethic of Justice for Children An ethic of justice for children is called for in view of how their human flourishing continues to be threatened by structural conditions of violence, poverty, disease and malnutrition. Yet, as Whitmore and Winwright (1997) have pointed out, children as human subjects in their own right and who they are remain an "undeveloped theme" in Catholic teaching, which "subsumes its treatment of children under the rubric of the family" (p. 161). While Catholic teaching would uphold the intrinsic dignity of children as being made in God's image and likeness, it also positions them as passive. "For the most part, church teaching simply admonishes the parents to educate their children in the faith and for children to obey their parents" (ibid., p. 162). It does not adequately address the social suffering of children caused by structural forms of injustice, some of which is reproduced and inflicted in family life. The global concern for the world's children calls for more complicated responses not only from the family, but beyond it and in connection with the state and other institutions.
Similarly, Ethna Regan (2014) highlights that children are "barely visible" in Catholic social thought (p. 1021). In her analysis, she singles out Evangelium Vitae by John Paul II for its "most extensive discussion of the child within Catholic social teaching" (ibid., p. 1026). However, the attention is skewed toward the unborn, reflecting a "hyper-natalism" that "does not defend the lives of born-hungry, impoverished, exploited, abandoned-children with the same zeal as the defence of the unborn child" (ibid., p. 1027). Regan argues for the need to articulate a "consistent ethic" (ibid, p. 1027) that corrects this hyper-natalism, especially in light of more reports on child sexual abuse in the Catholic Church: Children have become a new measure of justice for the church ad intra, a measure that will determine our credibility to speak on matters of justice for children, born and unborn, in a world where poor children continue to suffer from having too much to bear and from given too little to develop properly. (ibid., p. 1030) What is at stake is the Church's obligation and credibility as witness to God's mission of caring justly for children as an imperative of the Gospel.
Contemporary theological work on children and childhood has lifted up and reemphasised this biblical mandate to welcome, care and advocate for children as acts of Christian discipleship (cf. Mark 9: 33-37, 10: 14-16; Matthew 18: 2-6; Luke 9: 48). Biblical scholar Judith Gundry-Volf (2001), for example, reclaims the radicality of Jesus's teaching on children in the Synoptic Gospels: Jesus did not just teach how to make an adult world kinder and more just for children; he taught the arrival of a social world in part defined by and organized around children. (p. 60) One hears in these words the prophetic call to reclaim and re-commit to a preferential option for children in Christian mission. An ethic of justice that serves this option stems first from our recognition of children as active "representatives of Christ" (Gundry-Volf 2001, p. 60). Children reveal a glimpse of God's presence that transforms us in our common journey together in a life of faith. Beneath the social vulnerability of children is a fragile agency that an ethic of justice ought to protect and promote.
(ii) Relational agency of children The conception of children as agents is another important theme in the anthropological focus within the theological turn to children and childhood. This interest in children's agency stems partly from a social constructivist paradigm of childhood that repositions the passivity of children to being active makers of meaning in their life worlds (Hyde et al. 2010;James and Prout 1997;Wells 2015). Children do not have agency only when they become adults. They are agents as relational human beings from the outset, making a difference "with and to others" by the meanings they co-construct in socially situated ways (Pang 2021, p. 93). As David Oswell (2013) points out, agency is "always relational and never a property; it is always in-between and interstitial" (p. 270) Relational agency is realised in-between "[a]dults and children [who] belong together and contribute to each other's lives" (Sturm 1992, p. 158).
This interest in children's relational agency has pushed theologians and religion scholars to critique the often oversimplified ways in which the nature of children has been articulated in our religious traditions. As ethicist John Wall (2010b) highlights, "It is remarkable that, while Christianity has consistently held up humanity's ambiguous nature as simultaneously good and sinful overall, when it comes to children it has generally swung to one extreme or the other" (p. 255). Such unidimensional views "diminish their complexity and integrity, fostering narrow understandings of adult-child relationships" (Bunge 2006b, p. 54). The task, then, is to draw out more nuanced and diverse theological interpretations that reflect the complex wholeness of children as relational human beings. For example, theologian Bunge (2006b) has argued for the need to uphold "the complexity and dignity of children" by holding in tension six "paradoxical perspectives": children as "gifts of God and sources of joy"; "sinful creatures and moral agents"; "developing beings who need instruction and guidance"; "being made in the image of God"; "models of faith and sources of revelation"; and "orphans, neighbors, and strangers in need of justice and compassion" (pp. 58-62).
These six perspectives shape what theologian D.J. Konz (2014) has conceived as "mission postures" (p. 23) of the church toward children in Christian history. Konz particularly points out the need to be critically reflexive of adult-centric assumptions that place children as passive unformed adults in Christian mission. Recognising "the many childs of Christian history . . . alerts us to our adult tendency . . . to 'construct' what we understand a child to be" (Konz 2014, p. 23, italics his). Underscored in Bunge (2006b) and Konz (2014) is a move to critique and reconstruct our theological anthropology by encountering real children in their lived realities, making a space for their relational agency as integral to being fully human and made in God's image and likeness.
The conception of children as agents thus broadens the anthropological foundation for a preferential option for children, which does not only commit us to struggle against structural injustices that refuse them possibilities in this present life. It also calls us to promote their sense of agency in the here and now as responsible protagonists of social change. An ethic of justice for children (as discussed earlier) that protects them in their social vulnerability should also recognise and cultivate their relational agency. To be clear, the social vulnerability of children is not diametrically opposed to their relational agency. In fact, children's social vulnerability also lies in how their agency can easily be obscured, stifled, manipulated and/or exploited within structural relations of adult-centric power.
Implications for Catholic Education
This preferential option for children in mission must also push us to reflect on whether and how Catholic schools are prophetic spaces that educate children justly. To this end, Mary Doyle Roche (2009) draws on the common good to structure a vision of educating justly in Catholic schools. More crucially, she argues that the common good must necessarily include children's participation as vulnerable agents in their own right. This participation realises the intrinsic dignity of who they already are as God's children. Thus: An adequate vision of the common good must account for the vulnerabilities and the possibilities of children and childhood, and bring children in from the margins to the center to insure that our assumptions about the "common" good are not distorted by the perspective of those in positions of power and privilege. With children's experiences at the center, the common good of society allows for children as individuals, as members of families and other communities to flourish. (Roche 2009, p. 91) Children do not contribute to the common good only when they become adults. They do so as they are while growing to learn and live responsibly with others different from them. For Roche, the common good serves as a relational ethic that counters the transactional, performance-driven and competitive culture engendered by the commodification of education. It reclaims the importance of a communal anthropology in Catholic schooling that resists the more limited vision of child as burden, client, and future worker in market-based educational reforms (Roche 2009;Whitmore and Winwright 1997). This is a communal anthropology that encourages the agentic participation of children as social protagonists of justice in the schools themselves. Roche's proposal thus echoes a preferential option for children, which makes a claim on Catholic schools to create the conditions for the flourishing of children's lives as prophetic work.
Roche's vision of educating toward social justice for and with children calls for a prophetic mysticism of teaching that serves the liberation of children from conditions that trivialise, violate and/or deny their intrinsic human dignity as God's children. This is a prophetic mysticism that cultivates a contemplative way of being with children that is socially engaged. It awakens educators to the sense of being called to teach justly by "God's disturbing presence" (Gittins 2002, p. 43) through children, especially in situations of impoverishment suffered by them. In this regard, I turn to the Lasallian charism as one of many Christian sources of spirituality for inspiration, retrieving a prophetic mysticism of faith for teaching children justly. Not only is this retrieval significant in light of a recommitment to a preferential option for children within the contemporary theological turn to children and childhood. As I will argue, the focus on children as relational agents also allow us to re-read and deepen the dynamic of call in Lasallian mysticism: the sense that children are as much bearers of God's presence who call forth the prophetic witness of the Christian educator to teach justly.
The Lasallian Charism
The Lasallian tradition originates from John Baptist de La Salle (1651-1719), patron saint for Christian educators of the young and founder of the Institute of the Brothers of the Christian Schools in seventeenth century France. From its inception of Christian schools for poor boys in Rheims, the Institute has today a worldwide educational mission shared by the De La Salle Brothers and lay partners from across different faiths and cultures, and in an international network of Lasallian schools and organizations focused on the human and Christian education of the young. In this section, I draw on De La Salle's writings, the Institute's documents, and contemporary Lasallian scholarship to distill and articulate a prophetic mysticism of faith. Some may question if it is critically limiting to draw on Lasallian scholarship produced internally by the Brothers for the Institute. Yet, this is not necessarily so as these internal sources shed light on how the Lasallian charism is understood and articulated from within the Institute through time. To the best of my knowledge, these sources have also not been widely made known outside the Institute to a larger academic community, and they should.
The Lasallian charism, as understood by the Institute, is connected to, but not synonymous with the charism of De La Salle (Schneider 2006. 2 Following Lee (2004, the Lasallian charism is dynamically renewed in response to mission discerned in this moment of the world's history. Yet, the manner of its renewal returns to the "deep story" (Lee 2004, p. 24) of De La Salle's founding vision that inspires the present with its living wisdom. This wisdom is found in a spirituality that grounds the call to teach children justly in an incarnational vision of education.
Teaching Children Justly: A Commitment to a Preferential Option for Children
Typically, the Institute speaks of its commitment to social justice as integral to its educational service to the poor. As noted in the recently published Declaration on the Lasallian Education: "The educational service of the poor is, in essence, a service to the cause of justice that, in turn, promotes equitable, inclusive societies respectful of the dignity of people and attentive to the full satisfaction of their needs" (Brothers of the Christian Schools 2020, p. 89, emphasis mine).
This conviction goes back to the founding of the Christian Schools established gratuitously for teaching poor boys how to read and write. Their origin was counter-cultural in light of the dominant belief in 17th century France that one was born into a particular stratum of society, and that the socio-economic hierarchy was inevitable. On the one hand, there were proponents such as Charles Démia of Lyon who pleaded urgently for schools to educate the city's poor children as a matter of enabling upward social mobility. On the other hand, opponents such as La Chalotais argued against any generalized instruction of the poor: "The good of society demands that knowledge of the people not surpass that which is necessary for their work. Each man who looks beyond his sad trade will not dedicate himself to it with diligence and patience" (cited in Hengemüle 2016, p. 16). Opposition also came from poor parents, who did not view education as necessary for their children.
De La Salle, however, went against the status quo, not as an educational reformer but as a priest and canon from the perspective of Christian faith. For him, education was necessary for all persons-including the poor-to know God's goodness and to realize their God-given potential in this life: God is so good that, having created us, he wills that all of us come to the knowledge of the truth. This truth is God himself and what he has desired to reveal to us through Jesus Christ, through the holy apostles, and through his Church. This is why God wills all people to be instructed, so that their minds may be enlightened by the light of faith. (M. 193.1) Priority was given to the poor because Jesus came to be with the poor, and as the poor. The poor, he writes, are "images of Jesus Christ . . . who are best disposed to receive his Spirit in abundance" (M. 173.1). They share an equal and noble dignity before God as God's children.
The early Christian Schools posed a prophetic challenge to the social order in promoting universal access to quality education. Looking back, the Declaration interprets their founding as being a "Lasallian enterprise . . . born on the borders of dehumanization" (Brothers of the Christian Schools 2020, p. 92). From the standpoint of contemporary theological research on children and childhood, however, I would like to draw out more explicitly a particular option for children present in its tradition of service to the poor. An option for children does not replace a missional priority on the poor, but deepens it by calling attention to how they are affected the most by social, economic and political conditions that impoverish human life. Lasallian scholar Jean-Louis Schneider (2006) notes, "Lasallian charism was born in a certain environment: that of the educational movement for the children of the poor (or of the working classes) of the Church of the Council of Trent, in France" (p. 54). As stated in the Rule for the Institute of the Brothers in 1718: The necessity of this Institute is very great, because the working class and the poor, being usually little instructed and occupied all day in gaining a livelihood for themselves and their children, cannot give them the instruction they need and a respectable Christian education. nor a suitable education. It was to procure this advantage for the children of the working class and of the poor that the Christian Schools were established. (para. 4-5, cited in De La Salle 2002) De La Salle was stirred not only by the needs of artisans and the poor, but also by how their conditions had given rise to the neglect and abandonment of children. His educational vision was a response in faith to the social suffering of children. The Brothers were to "look upon the children whom [they] are charged to teach as poor, abandoned orphans" in the same way as God "looks on them with compassion and takes care of them as being their protector" (M. 37.3).
This preferential option for children is most notably reflected in the Institute's decision to incorporate the defence and promotion of children's rights as integral to the Lasallian educational mission since 2000. This move was catalyzed in part by Brother John Johnston, FSC, who as Brother Superior General in 1999, issued a groundbreaking pastoral letter-On the Defense of Children, the Reign of God, and the Lasallian Mission. 3 Urging the Brothers and the wider Lasallian community to re-imagine its educational service to the poor through the lens of the United Nations Convention on the Rights of the Child (UNCRC), he writes: The thesis of this pastoral letter is that the situation of poor children in today's world is an unspeakable scandal that our Lasallian charism invites us to make solidarity with neglected, abandoned, marginalized, and exploited children a particular focus for our mission. (Johnston 2016, p. 466) Significantly, he recalls the Institute's Rule to frame the preferential option for children as a Lasallian imperative: Our Rule concisely and poignantly links De La Salle's progressive awareness of the situation of poor children with the origin and development of the Institute. As he became aware, by God's grace, of the human and spiritual distress of 'the children of artisans and of the poor,' their neglect and abandonment moved him profoundly. (Johnston 2016, p. 457, emphasis his) Johnston re-interprets this founding insight to include the violation of children's rights in situations of economic exploitation, discrimination, sexual abuse, illiteracy, violence and armed conflict. The Christian mandate to educate the poor must also now engage in the struggle for the human rights of children as a matter of justice, "in accord with what the Reign of God requires" (Johnston 2016, p. 459).
What I wish to highlight is how a preferential option for children serves as a hermeneutic that reads forward the mission of Lasallian education to the poor. Who the poor are in Lasallian mission has also shifted. From the lens of children's rights, the focus is not simply on materially poor children or even the structural conditions that impoverish their lives. It also points to the social marginalization of children as the poor, in not having their concerns heard and a voice to speak. The impetus for the renewal of the Lasallian charism, I suggest, lies in the Institute's preferential option for children prophetically committed to their liberation through education in two senses: first, freedom from dehumanizing conditions that threaten the survival of children and violate their human dignity; second, freedom for their participation in the social fabric of life through a sense of belonging in the world as responsible agents and protagonists of social change. Yet, this liberatory impulse to educate children justly is discerned and renewed by a prophetic mysticism that grounds teaching as a call and practice of incarnational presence.
Prophetic Mysticism That Grounds Teaching as Incarnational Presence
For De La Salle, effective Christian education of the young depends on quality teachers who must not only be pedagogically skilled. They must also be persons of faith and moral integrity. Yet, one of the challenges faced in the early founding of the Christian schools had been the difficulty of having schoolmasters of good character. As Edward Fitzpatrick (1951) points out, "the men who drifted into teaching were too often what might be called the dregs of humanity" (p. 209). There was also no privilege in teaching poor children. The spiritual formation of teachers was De La Salle's response to this challenge, so as to stabilise a community of committed educators and sustain their sense of mission. It was essential to inspire and form these "bedraggled schoolteachers" (Salm 2017, p. 151) to see their work as a sacred calling tied to the human dignity of the poor children they served.
To inspire in them a sense of teaching as a vocation, De La Salle drew heavily on metaphors in his meditations to shape their educational imagination spiritually. These metaphors are still formative for us today. Christian educators are "ambassadors and ministers of Jesus Christ" (M. 195.2), called to "announce the Gospel of the kingdom of God" (M. 199.2) by the witness of their lives. Just as Jesus Christ "the good shepherd who has great care for the sheep," they are obliged to know and understand each student so as "to discern the right way to guide them" (M. 33.1). As guides and companions to the young, Christian educators also serve as "Guardian Angels" (M. 198.1). Ultimately, Christian educators are called to co-operate with Jesus Christ in the power of the Spirit to "touch [the] hearts" (M. 43.3) of students, drawing them to know and love God. Students are "a letter which Jesus Christ dictates to you [the educator], which you write each day in their hearts, not with ink, but by the Spirit of the living God, who acts in you and by you through the power of Jesus Christ" (M. 195.2).
Scholarship on Lasallian spirituality has well exegeted these images (for example Fitzpatrick 1951;Marquiegui 2018;Wright 2017) to underscore teaching as a relational practice of incarnational presence. However, as Lasallian scholar Miguel Campos (2012) highlights, the dominant accent in much of this scholarship has been on De La Salle's asceticism and the practice of virtues, driven in part by a theology of religious life that stresses on Christian perfection. The result is a prominence placed on an imitative model of discipleship, where the Christian educator follows the example of Jesus Christ in the practice of virtues that inspires students to do the same. S/he "must act as representing Jesus Christ himself" (M. 195.2). What often becomes obscured is the contemplative depth of the "mystical and ministerial thrust" (Campos 2012, p. 1) in De La Salle's spirituality. Yet, a key strand that renders depth to his spiritual writings is this: teaching as incarnational presence begins with a contemplative attentiveness and receptivity to Jesus Christ present in each child as God's own.
As Retrievable from this meditation is a dynamic of call. God calls us to the vocation of teaching through children, by becoming one of them and with them in Jesus Christ. The Christian vocation of teaching is thus rooted in a mysticism that sees God's presence in and through children as gift, which simultaneously calls forth the task of nurture so as to build the Reign of God with them in the present. "It is God himself who has led them to you; it is God who makes you responsible for their salvation", (M. 37.1) writes De La Salle. In Bunge's (2006b) terms, De La Salle's writings place an accent on children as "developing beings who need instruction and guidance" (p. 60), as well as their marginal status as "orphans, neighbors, and strangers who need to be treated with justice and compassion" (p. 62; M. 37.3). Yet, children are also "sources or vehicles of revelation" (p. 61). They are vulnerable agents who mediate God's call to teach justly.
Inextricably bound up with this mysticism is the call of the Christian educator as prophetic witness. For De La Salle, the Magi are also an image of prophetic resistance in unsettling Herod's kingship by their search for the Infant Child: What holy audacity in our Magi, to enter the capital and make their way even to Herod's throne! They feared nothing because the faith inspired them and the grandeur of [Christ] whom they were seeking caused them to forget and even to scorn all human considerations, considering the king to whom they were speaking to be infinitely beneath the one announced to them by the star. (M. 96.2) Upon encountering the Infant Child, the Magi "left without concerning themselves any further about King Herod" (M. 96.2). De La Salle connects their refusal to cooperate with Herod to the prophetic stance of the Christian educator: "So, too, should faith make you despise all that the world esteems" (M. 96.2). To qualify, De La Salle is not rejecting the world. Rather, in the words of biblical scholar Warren Carter (2002), De La Salle is challenging the Christian educator to "resist the empire's unjust commitments to power, wealth, and status" (p. 40). The prophetic witness of the Christian educator is mystically rooted in an interior conversion of the heart to the poor Christ in children.
Stemming from Lasallian mysticism, then, is a mode of discerning the prophetic contours of teaching as a Christian vocation from the standpoint of God's solidarity with children as the least amongst the poor in Christ. Lasallian scholar Luke Salm (2017) describes this mode as a movement of "double contemplation" that discerns on the one hand God's saving will for all, but from the concrete social realities of children on the other hand (p. 150). In the Lasallian tradition, the prophetic call to educate the poor is inseparable from the struggle for social justice that particularly lifts up the humanity of children who continue to be socially marginalized. A commitment to a preferential option for children calls for the renewal and development of such a prophetic mysticism that is socially responsive and praxis-oriented toward the liberation of children as Missio Dei in education.
Faith and Zeal as a Dynamic of Lasallian Prophetic Mysticism
Animating Lasallian prophetic mysticism is the twinned dynamic of Faith and Zeal as the Spirit of the Institute. According to Jacques Goussin (2003), the Spirit of Faith is "a Christian viewpoint, a way of seeing and judging that is in harmony with the Gospel" (p. 91). It confronts educators to "learn to see in every happening and in each person, especially in the poor, a sign and call of the Spirit" (Goussin 2003, p. 93). The Spirit of Faith is as such a principle in Lasallian discernment rooted in a conviction that sees no reality outside of God. This is a God who remains faithfully present to and in the world, dynamically involved in its transformation through education.
Thus, the Spirit of Faith transposes the witness of teachers into a contemplative key, to expect the more of God as living Mystery active in providing, sustaining, and drawing educational relationships into the fullness of divine life. It paradoxically demands that the teacher freely give of her/himself to participate in God's educational mission, to trust radically in the Providence of God with the conviction that "the One-Who-Calls creates us for vocation, a capacity for responding to relationship" (Cahalan 2017, p. 17). It is this faith that underpins De La Salle's invocation of prophet Habakkuk's words: "Lord, the work is yours" (Koch et al. 2004, p. 225). What one hears in these words is a hope-filled abandonment to an open future in God, whose Spirit calls teachers toward an ardent zeal to incarnate God's presence in the everyday educational activity with children.
Lasallian zeal, which flows from the Spirit of Faith, propels the prophetic witness of teachers patterned after Christ's kenotic love to be with, and of service to children. Zeal is incarnational faith in action: Let it be clear, then, in all your conduct towards the children who are entrusted to you that you look upon yourselves as ministers of God, carrying out your ministry with love and a sincere and true zeal, accepting with much patience the difficulties you have to suffer, willing to be despised by men to be persecuted, even to give your life for Jesus in the fulfillment of your ministry. (M. 201.1) This zeal to educate children for God's mission is thus impelled by God's saving love through Jesus Christ (M. 201.2). It is that fire in the belly, which charges teachers to announce the Gospel in the context of the school as Christ's body, whose members include children as fellow disciples and as "heirs of the kingdom of God" (M. 96.3; M. 201.2). To announce the Gospel, as Campos (1994) points out, "is not reduced to practices and prescriptions" (p. 424). Rather, it obliges educators to "become incarnate, that is, take on the flesh and blood realities of the students' lives in an affective and effective manner, to walk around in their shoes, to unite [their] own history to that of [their] students, to the whole history of salvation, to the mystery of Christ" (Campos 1994, p. 424). Lasallian zeal thus drives educators to become living witnesses of the Gospel by accompanying the young, guiding them to see that their lives matter because God has created them to bear goodness in the world. Lasallian zeal draws forth passion and perseverance from teachers to desire for and work toward the human flourishing of children as their students.
Toward a Praxis of Socially Engaged Contemplation
What ought to be reclaimed and renewed from the Lasallian charism is a prophetic mysticism for teaching children justly. This renewal is not only for the Institute, but also in service of Catholic education to take seriously a preferential option for children. To recall Regan (2014): "Children have become a new measure of justice for the church ad intra" (p. 1030). Lasallian prophetic mysticism consists of a mode of educational discernment that imbibes an ethic of justice for children. It offers a praxis of socially engaged contemplation that cultivates in educators an ethical presence necessary for teaching children justly in Catholic education.
In this regard, I turn to the writings of theologian and Lasallian scholar, Michel Sauvage (1999), who has conceived of the mystical-prophetic stance in De La Salle's spiritual doctrine as "mystical realism" (p. 224). As he contends, Lasallian mystical realism is distinctive because of its emergence from a spirituality that De La Salle developed specifically for the Brothers as educators. Such was a spirituality that arose gradually from "the concrete existential situation" of the Brothers in their relationships with one another, as well as with the young they instruct (ibid., p. 224). Sauvage further distills a four-fold "rhythm" to capture the inner dynamics of this mystical realism: (i) "to consider the concrete teaching situation"; (ii) "to contemplate the element of mystery involved with it"; (iii) "to make a renewed commitment to transform the present reality"; and (iv) "to be open to the transcendent and freely given Ultimate, that is, to the reality of God" (ibid., 224). I have visually represented this four-fold "rhythm" as iterative and cyclical (rather than linear) in Figure 1 below. This four-fold 'rhythm' captures the dialectical relationship between contemplation and social action in teaching. I suggest that it scaffolds a praxis of socially engaged contemplation which, from a Lasallian perspective, begins with the lived relationship between teacher and student. In considering the concrete teaching situation, Sauvage calls the educator to be attentive to the social realities of students and the demands they make not only on how one teaches, but also who one becomes as teacher: Look at the life you are living; be aware of the distressing situation of the youngsters that God has placed in your path; use that as a measure of what is at stake in your teaching service. (ibid., p. 225) The connection with the prophetic is in the third movement regarding transformative praxis. By this, Sauvage urges the educator to be pedagogically creative but not simply for the sake of being technically innovative. Such creativity is rooted in a participation with the Spirit that leads the teacher to live into the "mystery of [Christ's] struggle for justice" (ibid., 226). This four-fold 'rhythm' captures the dialectical relationship between contemplation and social action in teaching. I suggest that it scaffolds a praxis of socially engaged contemplation which, from a Lasallian perspective, begins with the lived relationship between teacher and student. In considering the concrete teaching situation, Sauvage calls the educator to be attentive to the social realities of students and the demands they make not only on how one teaches, but also who one becomes as teacher: Look at the life you are living; be aware of the distressing situation of the youngsters that God has placed in your path; use that as a measure of what is at stake in your teaching service. (ibid., p. 225) The connection with the prophetic is in the third movement regarding transformative praxis. By this, Sauvage urges the educator to be pedagogically creative but not simply for the sake of being technically innovative. Such creativity is rooted in a participation with the Spirit that leads the teacher to live into the "mystery of [Christ's] struggle for justice" (ibid., 226).
At the heart of this Lasallian mystical realism is teaching as an incarnational work of touching the hearts of children in faith and zeal. "You carry out a work that requires you to touch hearts, but this you cannot do except by the Spirit of God" (M. 43.3). Yet, we cannot touch the hearts of children as educators until we allow them to break open our own hearts. Recall that De La Salle was moved by the plight of children he encountered in their poverty. Recent theological interest in children and childhood pushes us to recognize more clearly this receptivity within Lasallian mysticism to the cry of God's Spirit in the lives of children as vulnerable agents. It is this cry that awakens the hearts of teachers and summons them to prophetic witness.
Being attentive to this cry is foundational to cultivating an ethical presence for teaching children justly. In Table 1, I offer a set of reflection questions alongside Sauvage's fourfold 'rhythm', structuring a praxis of socially engaged contemplation. These questions invite educators to remain open and critically attuned to what the presence of each child as student calls from them. At the heart of this Lasallian mystical realism is teaching as an incarnational work of touching the hearts of children in faith and zeal. "You carry out a work that requires you to touch hearts, but this you cannot do except by the Spirit of God" (M. 43.3). Yet, we cannot touch the hearts of children as educators until we allow them to break open our own hearts. Recall that De La Salle was moved by the plight of children he encountered in their poverty. Recent theological interest in children and childhood pushes us to recognize more clearly this receptivity within Lasallian mysticism to the cry of God's Spirit in the lives of children as vulnerable agents. It is this cry that awakens the hearts of teachers and summons them to prophetic witness.
Being attentive to this cry is foundational to cultivating an ethical presence for teaching children justly. In Table 1, I offer a set of reflection questions alongside Sauvage's four-fold 'rhythm', structuring a praxis of socially engaged contemplation. These questions invite educators to remain open and critically attuned to what the presence of each child as student calls from them. Table 1. Questions for a praxis of socially engaged contemplation in teaching.
Four-Fold Rhythm in Lasallian Mystical
Realism (Sauvage 1999) Praxis of Socially Engaged Contemplation Be open to reality of God These questions underline a central dynamic in Lasallian prophetic mysticism or mystical realism: the process of educating justly is rooted in an ongoing conversion by the teacher to each child in the classroom as God's child. This conversion is a journey that requires the teacher to be open to moments of disruption in their encounters with children. The challenge is to see such disruptions as opportunities for pause and a discernment of their educational priorities, commitments and convictions.
Conclusions
In this article, I mine the Lasallian tradition for a prophetic mysticism that integrates contemplation with the public activity of teaching children justly as prophetic witness in contemporary Catholic education. This retrieval is significant in relation to: (i) a preferential option for children called from within the turn to children and childhood in contemporary theological research; and (ii) the renewal of religious charisms for God's mission through Catholic education. Building on Gerald Grace's concept of 'spiritual capital', Lydon (2021) argues how it can be fruitfully sustained when charisms of Religious Orders are lived and modeled by religious and committed lay people. I agree with this emphasis on the "primacy of witness" (p. 77). Integral to such witness must also be a contemplative way of seeing and relating with others, in discernment of where the Spirit is calling us to in mission. What the Lasallian charism offers to Catholic education is a prophetic mysticism that calls us to be present to the Spirit in the social realities of children. It consists of a mode of discerning the prophetic contours of teaching as a Christian vocation from the standpoint of God's solidarity with children as the least amongst the poor in Christ. Teaching children justly is holy work because of the Holy One who dwells in them. To teach children justly is to hear and respond to God's call through them, who are bearers of hope in the Reign of God already here.
Funding: This research received no external funding.
Conflicts of Interest:
The author declares no conflict of interest.
1
Meditations is a reference to De La Salle (1994). Hereafter cited in text as M., followed by the numbering used in this text. 2 This recalls the distinction between a Pauline understanding of charism as gift for service to the common good and a Weberian notion of charismatic leadership, as discussed in Lydon (2009). 3 | 10,766 | 2022-09-23T00:00:00.000 | [
"Philosophy"
] |
Antimicrobial activities of green synthesized gums-stabilized nanoparticles loaded with flavonoids
Herein, we report green synthesized nanoparticles based on stabilization by plant gums, loaded with citrus fruits flavonoids Hesperidin (HDN) and Naringin (NRG) as novel antimicrobial agents against brain-eating amoebae and multi-drug resistant bacteria. Nanoparticles were thoroughly characterized by using zetasizer, zeta potential, atomic force microscopy, ultravoilet-visible and Fourier transform-infrared spectroscopic techniques. The size of these spherical nanoparticles was found to be in the range of 100–225 nm. The antiamoebic effects of these green synthesized Silver and Gold nanoparticles loaded with HDN and NRG were tested against Acanthamoeba castellanii and Naegleria fowleri, while antibacterial effects were evaluated against methicillin-resistant Staphylococcus aureus (MRSA) and neuropathogenic Escherichia coli K1. Amoebicidal assays revealed that HDN loaded Silver nanoparticles stabilized by gum acacia (GA-AgNPs-HDN) quantitatively abolished amoeba viability by 100%, while NRG loaded Gold nanoparticles stabilized by gum tragacanth (GT-AuNPs-NRG) significantly reduced the viability of A. castellanii and N. fowleri at 50 µg per mL. Furthermore, these nanoparticles inhibited the encystation and excystation by more than 85%, as well as GA-AgNPs-HDN only completely obliterated amoeba-mediated host cells cytopathogenicity. Whereas, GA-AgNPs-HDN exhibited significant bactericidal effects against MRSA and E. coli K1 and reduced bacterial-mediated host cells cytotoxicity. Notably, when tested against human cells, these nanoparticles showed minimal (23%) cytotoxicity at even higher concentration of 100 µg per mL as compared to 50 µg per mL used for antimicrobial assays. Hence, these novel nanoparticles formulations hold potential as therapeutic agents against infections caused by brain-eating amoebae, as well as multi-drug resistant bacteria, and recommend a step forward in drug development.
www.nature.com/scientificreports www.nature.com/scientificreports/ Determination of flavonoids loading efficiency. The loading of HDN/NRG on nanoparticles was determined spectrophotometrically. First the nanoparticles were centrifuged at 12,000 × g for 30 min. Supernatant (containing free drug) was discarded and pellet was collected and re-dispersed in methanol up to the final volume. HDN and NRG were detected and quantified at 285 nm 17,26 . The percentage of encapsulated flavonoids were calculated by using this formula: = × %Flavonoid loaded (Amount of flavonoid loaded/Total flavonoid used) 100 A. castellanii cultures. A. castellanii (ATCC 50492) a clinical strain belonging to the T4 genotype, was routinely cultured in 10 mL growth medium consisting of 0.75% w/v proteose peptone, 0.75% w/v yeast extract, and 1.5% w/v glucose (PYG) in 75-cm 2 tissue culture flasks at 30 °C as described previously 27 . Amoebicidal and encystation assays were performed with healthy A. castellanii trophozoites which are adherent to the surface of tissue culture flask. These active trophozoites were detached by putting culture flasks on ice for 15 min followed by gentle tapping for roughly 5 minutes after changing PYG medium with phosphate buffer saline (PBS) to remove any unhealthy amoeba. Finally, A. castellanii trophozoites suspension was centrifuged at 2500 × g for 10 min to obtain amoeba pellet. The pellet was resuspended in 1 mL PBS, and the population of A. castellanii was determined by cell counting using a hemocytometer. 5 × 10 5 A. castellanii were used for amoebicidal and encystation assays.
Henrietta lacks cervical adenocarcinoma (HeLa) cells culture. HeLa cells were cultured in Roswell
Park Memorial Institute (RPMI)−1640 supplemented with 10% fetal bovine serum (FBS), 10% Nu-serum, 2 mM glutamine, 1 mM pyruvate, penicillin and streptomycin (100 units/mL and 100 μg/mL respectively), non-essential amino acids, and vitamins to obtain uniform monolayers of cells in 75-cm 2 culture flasks as described previously 28 . Old media was aspirated, and cells were trypsinized with 2 mL trypsin. The cell suspension was centrifuged for 5 min at 2000 × g, and cell pellet was resuspended in 30 mL fresh cell growth media. 200 μL of this cell suspension was seeded in each well of a 96-well plate and the plate was incubated at 37 °C in a 5% CO 2 incubator with 95% humidity for at least 24 h until formation of uniform monolayer of HeLa cells. These were used for N. fowleri cultures, cytotoxicity, and cytopathogenicity assays.
N. fowleri cultures. N. fowleri (ATCC 30174) a clinical isolate from the cerebrospinal fluid of a patient was cultured in 75-cm 2 tissue culture flasks containing HeLa monolayers as feed. N. fowleri was grown at 37 °C in a 5% CO 2 incubator with 95% humidity as described previously 21 .
Amoebicidal assay. Bactericidal assay. Antibacterial potential of nanoparticles and respective controls was determined by using bactericidal assay as described previously 30 . Briefly, bacterial cultures were fixed to an optical density of 0.22 at 595 nm using a spectrophotometer (OD 595 = 0.22) which is equivalent to 10 8 colony-forming units per mL (C.F.U. mL −1 ). An inoculum of 10 μL of above bacteria culture (corresponding approximately 10 6 C.F.U.) was incubated with various concentrations of GA-AgNPs-HDN, GT-AuNPs-NRG and respective controls in 1.5 mL centrifuge tubes at 37 °C for 2 h. For negative controls untreated bacterial culture were incubated with phosphate buffer www.nature.com/scientificreports www.nature.com/scientificreports/ saline (PBS), while 100 μg/mL gentamicin treated bacteria were used as positive control. Next, bacteria were serially diluted and 10 µL of each dilution was plated on nutrient agar plates. These plates were incubated at 37 °C overnight, followed by counting viable bacterial C.F.U.
pathogens-mediated host cells cytotoxicity. The cytopathogenicity assay was carried out as reported previously 31 . 5 × 10 5 A. castellanii, 10 6 C.F.U. of each E. coli K1 and MRSA were incubated with HDN and NRG loaded nanoparticles with respective controls at different concentrations for 2 h at 30 °C. Next, the microbial cultures were centrifuged at 2500 × g for 10 minutes, and supernatants were wasted to remove extracellular materials. The pellet obtained was resuspended in 500 µL of fresh RPMI-1640 which was put on another 24-well plated with HeLa cells monolayer. Cells were incubated at 37 °C in a 5% CO 2 incubator with 95% humidity for 24 h. Finally, supernatants were collected from each well and lactate dehydrogenase (LDH) cytotoxicity assay was performed using LDH kit (Roche) as described previously 23 . The extent of LDH release determines cells damage. Untreated cells were considered as negative control, whereas cells incubated with 0.1% Triton X-100 for 20 min gave maximum LDH release as a result of cell lysis which was taken as positive control. The % cell cytotoxicity was calculated as follows: % cell cytotoxicity = (sample absorbance − negative control absorbance)/(positive control absorbance − negative control absorbance) × 100. The results are representatives of several experiments presented as the mean ± standard error. Cytotoxicity assay. To evaluate the cytotoxic effects of these nanoparticles on human cell, LDH cytotoxicity assay was performed as reported previously 28 . Briefly, 100 µg per mL concentrations of HDN and NRG loaded nanoparticles and respective controls were treated with uniform monolayer of HeLa cells in a 24-well plate, and the cells were incubated for 24 h at 37 °C in a 5% CO 2 incubator. After 24 h, supernatants were collected from each well and cytotoxicity was determined by measuring lactate dehydrogenase (LDH) released by using LDH kit (Roche). Untreated cells were considered as negative control, whereas cells incubated with 0.1% Triton X-100 for 20 min gave maximum LDH release as a result of cell lysis which was taken as positive control. The percentage cell cytotoxicity was calculated as follows: % cell cytotoxicity = (sample absorbance − negative control absorbance)/ (positive control absorbance − negative control absorbance) × 100. The results are representatives of several experiments presented as the mean ± standard error. statistical analysis. Student T test was used to measure statistical correlation and significance. P < 0.05 was the limit for significance using two-sample T test and two-tailed distribution. *Represents P < 0.05, **represents P < 0.01, while ***represents P < 0.001.
Results
Characterization of GA-AgNPs and GA-AgNPs-HDN for determination of size, PDI, zeta potential and surface morphology. Size and shape of the nanoparticles have a key importance in the design of drug delivery systems. Smaller the size of the nanoparticles greater will be the surface to volume ratio, therefore the chances of interactions between the bioactive drug molecules and the nanoparticles increases which ultimately enhances the therapeutic efficacy of the drug 32 . GA-AgNPs possess 107.1 ± 2.56 nm mean size with PDI 0.270 ± 0.03. Similarly, GA-AgNPs-HDN exhibit 182.8 ± 1.02 nm and PDI 0.422 ± 0.01 results are shown in Fig. 1A,B. Larger mean size and PDI of the HDN loaded nanoparticles than that of their unloaded analogue is attributed to unequal scattering of the drug moieties over the surface of GA-AgNPs. Zeta potential of GA-AgNPs and GA-AgNPs-HDN was −18.6 ± 0.54 mV and −19.1 ± 1.34 mV respectively as shown in Fig. 1C,D. Zeta potential is another important parameter for the determination of nano-carrier stability. AFM and TEM of GA-Ag-NPs and GA-Ag-NPs-HD were investigated to find their morphology. Both nanoparticles found to be spherical in Fig. 2E. The FT-IR spectrum of HDN revealed its distinctive peaks at 3466.1and 2925.50 cm −1 assign to OH and C-H groups of HDN. The peak for C=O, C = C and aromatic C=C appeared at 1645.27, 1515.9 and 1443.0 cm −1 respectively (Fig. 2F). The FT-IR spectrum of GA-AgNPs-HDN shows all the representative peaks at their respective places with slight changes in absorbance. FT-IR analysis confirms the chemical stability of HDN as all peaks are present on their respective positions in the spectrum shown in Fig. 2F. The drug loading % was found to be 73.66%.
Characterization of GT-AuNPs and GT-AuNPs-NRG for determination of size, PDI, zeta potential and surface morphology. GT-AuNPs and GT-AuNPs-NRG exhibit 183 ± 1.04 and 221 ± 1.08 nm mean size with PDI 0.351 ± 0.02 and 0.410 ± 0.03 respectively which are shown in Table 1. The size of the GT-AuNPs as compare to NRG loaded counterpart is relatively larger which confirms the NRG loading on the surface of GT-AuNPs. Zeta potential of GT-AuNPs is −34.1 ± 0.1 and for GT-AuNPs-NRG is −27.6 ± 0.5 mV respectively. The drug loading % was found to be 72%. Surface morphology of both GT-AuNPs and GT-AuNPs-NRG were Table 1. The average size, polydispersity index (PDI), and zeta potential of GT-AuNPs and GT-AuNPs-NRG with the % NRG loading efficiency.
www.nature.com/scientificreports www.nature.com/scientificreports/ investigated through AFM and TEM, and they were found to be spherical in shape as shown in Fig. 3A-D. shows a representative surface plasmon resonance band of GT-AuNPs-NRG with maximum absorbance at 540 nm. -HDN abolished viability of A. castellanii and N. fowleri. Amoebicidal assay revealed that GA-AgNPs-HDN killed all the A. castellanii trophozoites at 50 µg per mL, and significantly reduced the number of cells by 90% at 25 µg per mL as compared to GA alone, HDN alone, and GA-AgNPs (Fig. 4A). Notably GA-AgNPs-HDN was found to be more effective than positive control Chlorhexidine. Whereas, GT-AuNPs-NRG did not produce cidal effects when statistically compared with GT alone, NRG alone and GT-AuNPs. On the other hand, both GA-AgNPs-HDN and GT-AuNPs-NRG exhibited significant amoebicidal effects against N. fowleri as compared to gums alone, drugs alone and gums stabilized nanoparticles (Fig. 4B). GA-AgNPs-HDN caused 99% reduction in N. fowleri viability at 25 µg per mL which is also significantly more effective than Amphotericin B alone. These results suggest that GA-AgNPs-HDN is an exceptional formulation which hold potential for further studies.
GA-AgNPs-HDN and GT-AuNPs-NRG inhibited encystment and excystation of A. castellanii.
As encystment of A. castellanii is responsible for the resistance against drugs, these nanoparticles were tested for inhibition of encystation. GA-AgNPs-HDN and GT-AuNPs-NRG significantly inhibited the encystation of A. castellanii at 100 µg per mL as compared to respective controls (Fig. 5A). GA-AgNPs-HDN caused 95% inhibition and GT-AuNPs-NRG inhibited the encystation by 85%. The de-differentiation of cysts into trophozoites causes recurrence of infection in most of the cases. Therefore, the effects of GA-AgNPs-HDN and GT-AuNPs-NRG were also evaluated against excystation. While treated with pre-formed mature cysts of A. castellanii, GA-AgNPs-HDN inhibited excystation by 84% at 100 µg per mL (Fig. 5B). Contrary, GT-AuNPs-NRG did not exhibit significant excystation when compared with GT-AuNPs. Since most of the lead compounds and drugs have limited effects against cysts of A. castellanii, these nanoparticles showed consistent effects against trophozoite as well as resistant cyst stage. www.nature.com/scientificreports www.nature.com/scientificreports/ GA-AgNPs-HDN exhibited significant bactericidal effects. Figure 6 represents the bactericidal effects of GA-AgNPs-HDN and GT-AuNPs-NRG tested at 50 and 0.5 µg per mL against MRSA and E. coli K1. GA-AgNPs-HDN showed significant bactericidal activity at 50 µg per mL against MRSA (Fig. 6B), and 0.5 µg per mL against E. coli K1 (Fig. 6D). GT-AuNPs-NRG did not exhibit bactericidal effects at 50 µg per mL against both tested bacteria (Fig. 6C,F). Figure 7 presents the corresponding field emission scanning electron microscopic (FE-SEM) analysis of bacteria before and after treatment with GA-AgNPs-HDN.
GA-AgNPs-HDN reduced the pathogens-mediated host cell cytotoxicity. The pretreatment of
A. castellanii and E. coli K1 with GA-AgNPs-HDN resulted in significant reduction of their cytopathogenicity against human cells. Figure 8A describes that untreated A. castellanii caused more than 80% cell cytotoxicity against HeLa cells, contrary, GA-AgNPs-HDN (50 µg per mL) completely obliterated the host cells cytotoxicity as compared to relative controls. Similarly, the pretreatment of 0.5 µg per mL GA-AgNPs-HDN with E. coli K1 abolished the cytotoxicity of bacterium against HeLa cells as compared to untreated E. coli K1 which exhibited 74% cytotoxicity (Fig. 8B).
GA-AgNPs-HDN and GT-AuNPs-NRG showed minimal cytotoxicity against human cells. When
tested against human cells, all test samples showed minimal cytotoxic effects (Fig. 9). GA-AgNPs-HDN exhibited only 11% cytotoxicity while GT-AuNPs-NRG caused 23% cytotoxicity against HeLa at a higher concentration of 100 µg per mL as compared to amoebicidal effects which were recorded at 50 and 25 µg per mL. The cytotoxicity profile against human cells suggest that these nanoparticles are biosafe and can further be evaluated for potential in in vivo studies. 50 . castellanii and (B) N. fowleri. The viability of amoebae was determined after amoebicidal assay as described in the materials and methods section. Briefly, A. castellanii or N. fowleri trophozoites were incubated with GA and GT alone, AgNPs alone, AuNPs alone, HDN alone, NRG alone, GA-AgNPs, GT-AuNPs, GA-AgNPs-HDN, and GT-AuNPs-NRG and negative and positive controls at 50 and 25 µg per mL at 30 °C for 24 h. Next, the viability was measured by Trypan blue exclusion assay. The results are presented as the mean ± standard error of various experiments performed in duplicate. *Represents P < 0.05, **represents P < 0.01, while ***represents P < 0.001. P values were obtained using two-sample T test and two-tailed distribution.
A. castellanii (1 × 10 5 ) were inoculated in PBS in the presence of GA-AgNPs-HDN and GT-AuNPs-NRG and respective controls at 100 µg per mL with encystation media and incubated at 30 °C for 72 h. Next, 0.25% sodium dodecyl sulfate (SDS) was added and incubated at room temperature for 10 min to lyse A. castellanii trophozoites followed by enumeration of amoebae cysts using a hemocytometer. (B) Excystation assays was performed by incubating GA-AgNPs, GT-AuNPs, GA-AgNPs-HDN, GT-AuNPs-NRG and respective controls (100 µg per mL) with A. castellanii cysts (1 × 10 5 ) in growth medium, PYG at 30 °C for 72 h. After this period, amoebae were counted using a hemocytometer. The results are presented as the mean ± standard error of various experiments performed in duplicate. *Represents P < 0.05, **represents P < 0.01, while ***represents P < 0.001. P values were obtained using two-sample T test and two-tailed distribution. GT-AuNPs-NRG shows no antibacterial activity (F). The results are presented as the mean ± standard error of various experiments performed in duplicate. *Represents P < 0.05, **represents P < 0.01, while ***represents P < 0.001. P values were obtained using two-sample T test and twotailed distribution. www.nature.com/scientificreports www.nature.com/scientificreports/
Discussion
Brain-eating amoebae are opportunistic protist pathogens associated with diseases of fatal severity. The molecular pathways to target these microbes are limited which results in challenges in development of effective therapeutics 4 . Current management and treatment are unspecific and ineffective due to which the CNS infections caused by brain-eating amoebae almost always proved to be deadly 33 . Furthermore, the clinical procedures suffer from limitations including long term use of medications (a mixture of drugs including biguanides, azoles, amidines, antibiotics) and still the chances of recurrence are high 34 . On the other hand, ever growing drug resistance in most commonly present bacteria, and lack of newer and improved antimicrobial agents pose serious challenges to healthcare systems 35 . Therefore, there is an urgent need to develop novel, sustainable, and effective modalities of chemotherapeutics against infectious diseases. Nanotechnology has proved to be a model alternative to target infectious diseases 36 . Due to small size of nanomaterials, these are efficient drug delivery carriers for minimizing the pharmacokinetics and pharmacodynamics limitations of compounds and drugs known to have medicinal values 37 . Flavonoids are important nutraceutical and biologically active class of secondary metabolites natural products. Flavonoids obtained from citrus fruit plants are rich of drug candidates against a variety of diseases such as; infectious diseases, cancer, neurodegenerative etc. [38][39][40][41] . However, their clinical applications have some common shortcomings, and their poor bioavailability is one of the major factors 14 . In this study, we synthesized silver and gold nanoparticles stabilized with plant gums and loaded them with two most common citrus fruits flavonoids HDN and NRG to utilize their antimicrobial activity against brain-eating parasites A. castellanii and N. fowleri and multi-drug resistant bacteria MRSA and neuropathogenic E. coli K1. °C in a 5% CO 2 incubator as described in materials and methods section. Next, cell-free supernatant was collected, and cytotoxicity was determined using Lactate dehydrogenase (LDH) assay kit (Roche). (B) E. coli K1 caused 74% cytotoxicity to HeLa cells. Upon pretreatment with 0.5 µg per mL GA-AgNPs-HDN, the host cells cytotoxicity was reduced to 1%. The results are presented as the mean ± standard error of various experiments performed in duplicate. *Represents P < 0.05, **represents P < 0.01, while ***represents P < 0.001. P values were obtained using two-sample T test and two-tailed distribution.
0% 25% 50% 75% 100% Cytotoxicity against human cells Figure 9. GA-AgNPs-HDN and GT-AuNPs-NRG did not exhibit cytotoxicity against HeLa cells at 100 µg per mL. These nanoparticles and the respective controls were incubated at 30 °C with HeLa cells monolayer for 24 h at 37 °C in a 5% CO 2 incubator. Following this incubation, cell-free supernatant was collected, and cytotoxicity was determined using Lactate dehydrogenase (LDH) assay kit (Roche www.nature.com/scientificreports www.nature.com/scientificreports/ Green synthesis of nanoparticles involves the reduction of metal ions by using environmentally and eco-friendly materials which act as reducing and stabilizing agents. Microorganisms and plant materials have been widely used for the biosynthesis of nanoparticles 42 . Green synthesized nanoparticles have been extensively used against microbial diseases, however, only few results are reported against parasitic diseases 43 . Besides metallic nanoparticles, green polymers including cellulose and starch have also been used for clinical and biomedical applications including bones healing and substitution [44][45][46] . The synthesis and stabilization of nanoparticles is dependent on the reducing and capping ability of the material used. In this study, the reduction of silver and gold is accomplished by using biocompatible natural gums; GA and GT. This green approach is being exploited for the formation of nanoparticles by avoiding any toxic reducing agent and harsh temperature conditions 17 . These nanoparticles were thoroughly characterized by various instrumental techniques before subjected to biological evaluation against brain-eating amoebae. The role of Toll-like receptors (TLRs) in innate immune responses to pathogens is well recognized 47,48 . Our previous study showed that, HDN is known to reduce the expression of mRNA in TLRs which as a result reduce inflammation 17 . As the TLRs can influence the immunopathogenesis of CNS parasitic infections, we proposed that using TLRs targeting compounds can decrease the activity of inflammatory cytokines which may affect the parasite clearance and host survival. However, upon loading of HDN on GA-AgNPs caused surprisingly drastic amoebicidal effects, the mechanism of which is yet unknown. On the other hand, NRG acts as inhibitor of cytochrome P450 49 , which is known to be a common pathway associated with antimicrobial mode of action against brain-eating amoebae 50 . In our previous report, we showed the antibacterial effects of GT-AuNPs-NRG against a variety of bacteria, however their IC 50 values were high (in the range of 250-300 µg per mL) 26 . GA-AgNPs-HDN are found to be more potent which showed significant bactericidal effects at 50 and 0.5 µg per mL against MRSA and E. coli K1 respectively. Interestingly, Gram-negative E. coli K1 which have additional peptidoglycan cell wall as compared to Gram-positive bacteria is found to be more susceptible to GA-AgNPs-HDN. The mode of action of such potent antimicrobial effects of these nanoparticles however is still to be determined.
Conclusions
The green synthesis of silver and gold nanoparticles stabilized with natural glycosidic polymers of plant gums (gum acacia and gum tragacanth) was achieved. These nanoparticles were further loaded with citrus fruits flavonoids HDN and NRG to obtain GA-AgNPs-HDN and GT-AuNPs-NRG. The nanoparticles were characterized by zetasizer, zetapotential, AFM, UV-vis spectrophotometric, and FT-IR analyses. GA-AgNPs-HDN and GT-AuNPs-NRG were tested against brain-eating amoebae A. castellanii and N. fowleri, as well as multi-drug resistant bacteria MRSA and neuropathogenic E. coli K1. These nanoparticles exhibited potent amoebicidal and bactericidal effects, and also inhibited the encystation and excystation processes of A. castellanii. Furthermore, these nanoparticles significantly reduced the pathogens-mediated host cells cytotoxicity. Interestingly, these nanocarriers did not show cytotoxicity against human cells even at higher concentration as compared to their concentration used for antimicrobial effects. This study demonstrates a potential development of effective antimicrobial nano-formulations based on naturally occurring flavonoids. These results are anticipated to be a major step forward in developing efficient nanomedicine against pathogenic microbes including brain-eating amoebae and bacterial infections. The mechanism of action and in vivo studies are part of our future research.
Data Availability
Data will be provided upon request on case to case basis. | 5,118.4 | 2019-02-28T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry",
"Materials Science"
] |
A hollow sphere as a detector of gravitational radiation
The most important features of the proposed spherical gravitational wave detectors are closely linked with their symmetry. Hollow spheres share this property with solid ones, considered in the literature so far, and constitute an interesting alternative for the realization of an omnidirectional gravitational wave detector. In this paper we address the problem of how a hollow elastic sphere interacts with an incoming gravitational wave and find an analytical solution for its normal mode spectrum and response, as well as for its energy absorption cross sections. It appears that this shape can be designed having relatively low resonance frequencies (about 200 Hz) yet keeping a large cross section, so its frequency range overlaps with the projected large interferometers. We also apply the obtained results to discuss the performance of a hollow sphere as a detector for a variety of gravitational wave signals.
The most important features of the proposed spherical gravitational wave detectors are closely linked with their symmetry. Hollow spheres share this property with solid ones, considered in the literature so far, and constitute an interesting alternative for the realization of an omnidirectional gravitational wave detector. In this paper we address the problem of how a hollow elastic sphere interacts with an incoming gravitational wave and find an analytical solution for its normal mode spectrum and response, as well as for its energy absorption cross sections. It appears that this shape can be designed having relatively low resonance frequencies (∼ 200 Hz) yet keeping a large cross section, so its frequency range overlaps with the projected large interferometers. We also apply the obtained results to discuss the performance of a hollow sphere as a detector for a variety of gravitational wave signals. 04.80.Nn, 95.55.Ym
I. INTRODUCTION
Thirty-five years after the beginning of the experimental search for cosmic gravitational waves (GW), several resonant-mass detectors (cryogenic cylindrical bars) are currently monitoring the strongest potential sources in our Galaxy and in the local group [1]. The sensitivity of such detectors is h ≃ 6 × 10 −19 for millisecond GW bursts, or, in spectral units, 10 −21 Hz −1/2 over a bandwidth of a few Hz around 1 kHz. A further improvement in sensitivity and bandwidth is expected from the operation at ultralow temperatures of the two bar detectors NAUTILUS [2] and AURIGA [3] in Italy, and even better sensitivities and bandwidths will come about as more advanced readout systems are developed. Projects for spherical resonant-mass GW detectors have emerged in the last few years in the resonant-mass community [4][5][6][7], due to their remarkable advantages with respect to the operating bars [8].
In a cylindrical bar only the first longitudinal mode of vibration interacts strongly with the GW, and consequently only one wave parameter can be measured: the amplitude of a combination of the two polarization states. On the other hand each quadrupole mode of a spherical mass is five-fold degenerate (its angular dependence is described in terms of the five spherical harmonics Y lm (θ,ϕ) with l=2 and m = −2,...,2), and presents an isotropic cross section. The cross section of the lowest order (n=1) mode is the highest, and is larger than that of a cylindrical antenna made of the same material and with the same resonant frequency by a factor of about 0.8 (R s /R b ) 2 [6,7], where R s and R b are the radius of the sphere and of the bar, respectively. This means a factor of 20 over present bars. Moreover, the sphere's cross section is also high at its second quadrupole harmonic.
The five-fold degeneracy of the quadrupole modes enables the determination of the GW amplitudes of two polarization states and the two angles of the source direction. The method first outlined by Forward [9] and later developed by Wagoner and Paik [10], consists in measuring the sphere vibrations in at least five independent locations on the sphere surface so as to determine the vibration amplitude of each of the five degenerate modes. The Fourier components of the GW amplitudes at any quadrupole frequencies and the two angles defining the source direction can be obtained as suitable combinations of these five outputs [5,6,8,11,12].
The signal deconvolution is based on the assumption that in the wave frame (that in which the z axis is aligned with the wave propagation direction) only the l=2 and m=±2 modes are excited by the GW, as the helicity of a GW is 2 in General Relativity. One can take advantage of this to deconvolve the wave propagation direction and the GW amplitudes in the wave frame.
Most of the nice properties of a spherical GW detector depend on its being spherically symmetric. A spherical shell, or hollow sphere, obviously maintains that symmetry, so it can be considered an interesting alternative to the usual solid sphere. In order to have a good cross section, a resonant GW detector must be made of a high speed of sound material, and have a large mass. The actual construction of a massive spherical body may be technically difficult. In fact, fabricating a large hollow sphere is a different task than fabricating a solid one. Casting a hollow half sphere is a nearly two dimensional cast, at odds with casting a solid sphere, which requires rather special moulds. As an example of the feasibility of large two dimensional casting we can mention the fabrication of propellers of more than 10 meters in size and masses of the order of 100 tons [13]. Two hollow hemispheres could then be welded together with electron beam techniques. However, while it is known that these welding technique preserve most of the properties of the bare material, its effect on the acoustic quality factor (a relevant paramenter in resonant mass detectors) must be further studied.
We have investigated the properties of a hollow sphere as a potential GW antenna. The purpose of this paper is to present a detailed report of the main results of such an investigation, and to discuss the real interest of this new detector shape.
In section 2 we present the complete analytical solution of the eigenmode problem for a hollow sphere of arbitrary thickness, including the full frequency and amplitude spectrum. Section 3 is devoted to the cross section analysis, while in section 4 we take up the study of the system sensitivity to various GW signal classes. Finally, we present an outlook and summary of conclusions in section 5.
II. NORMAL MODES OF VIBRATION AND EIGENFREQUENCIES OF A HOLLOW SPHERE
In this section we consider the problem of a hollow elastic sphere in order to obtain its normal modes and frequency spectrum. This is a classical problem in Elasticity theory which was posed and partly addressed already in the last century, see e.g. [14] and references therein.
Let R and a be the outer and inner radius of the sphere, respectively. The elastic properties of the sphere, provided it is homogenous and isotropic, will be described by its Lamè coefficients, λ and µ, and its density, ρ. As is well known (see, e.g., [8]), the normal modes are obtained as the solutions to the eigenvalue equation subject to the boundary conditions that the solid's surface be free of any tensions and/or tractions; these are expressed by the equations σ ij n j = 0 at r = R and at r = A (R ≥ a ≥ 0), (2.2) where the sphere's surface S has outward normal n. The possibility of a spherical shell (a = R), and that of a solid sphere (a = 0), are allowed. The stress tensor σ ij is given by [8] σ ij = λ u k,k δ ij + 2 µ u (i,j) . (2. 3) The general solution to (2.1) can be cast in the form where C i , D i are constants, L ≡ −ix × ∇, and the scalar functions φ, ψ,φ,ψ are given by where q ≡ k µ/(λ + µ), and Y lm denotes a spherical harmonic. Finally, j l and y l are the standard Bessel functions of the first and second kind, respectively (see, e.g., [15]). The latter (which are singular at the origin) must be included in our case, as r = 0 lies outside the boundary S. The boundary conditions (2.2) become, after rather lengthy calculations, a system of linear equations which splits up into a 4 × 4 linear system for (C o , C 2 , D o , D 2 ), and a 2 × 2 system for (C 1 , D 1 ). That is, we have a linear system of the form: where the superscript t denotes transpostion, and the corresponding matrices are: and Here s ≡ q/k, and we have introduced the set of functions: while the tilded ones are their singular counterparts, with y l instead of j l (i.e.,β o (z) ≡ y l (z) z −2 , and so on). The matrices A P and A T are functions of kR, and depend on the parameter a/R, and, in the case of A P , also on s 1 . The discrete set of kR values that make compatible the system (2.7) constitute the spectrum of the elastic sphere. We can distinguish two families of normal modes: (i) Toroidal modes. These are characterized by Hence they are purely tangential, and their frequencies depend only on the ratio a/R. Their amplitudes are where C 1 (n, l) is fixed by the chosen normalization. The corresponding eigenvalues are obtained as solutions to the trascendental equation (2.13). For the degenerate limit a = R the equation to be solved is with the prime denoting differentation respect to the argument. Using standard properties of Bessel functions [15], it can be easily shown that and, in this case, there is only one eigenvalue for each l > 1, given by the only root of the above equation, (k l R) 2 = l(l + 1) − 2 2 . Figure 1 displays k nl R as a function of a/R for the first few toroidal modes. The existence of just one mode for each l > 1 in the thin shell limit shows as a divergence of k nl R when a/R approaches 1 and n > 1. In figure 2 we plot the normalized toroidal amplitudes T nl (r) for two quadrupolar modes and three different values of the parameter a/R. We observe that their absolute values at the outer surface show little dependence on the ratio a/R. (ii) Spheroidal modes. This second family is charaterized by: In this case, the expressions get more involved, as we have to handle a 4 × 4 determinant. Once the spectrum k nl is found for given a/R and s, the system (2.7) can be solved for C 2 /C o , D o /C o , and D 2 /C o . If we label these coefficients p o (n, l), p 1 (n, l), p 2 (n, l), the eigenmodes can be written as where C o (n, l) is, again, free up to normalization. The spectrum for the degenerate case a = R is given by the solutions to which happens to have two solutions for each value of l when l > 1 and only one root for l < 2 3 . Plotting k nl R as a function of a/R, we see that the third and higher roots diverge as the inner radius approaches R, see figures 3 and 4. Figures 5-7 show the normalized radial functions for a few spheroidal modes and values of a/R. As in the toroidal case, their values at r = R (where measurements using transducers are to be made eventually) are nearly independent of a/R.
III. CROSS SECTION FOR THE HOLLOW SPHERE
A convenient way to characterise a resonant detector sensitivity is through its GW energy absorption cross section, defined as where ∆E a (ω) is the energy absorbed by the detector at frequency ω, and Φ(ω) is the incident flux density expressed e.g. in watt/m 2 Hz. Estimation of σ abs (ω) requires a hypothesis about the underlying gravitation theory to calculate Φ(ω), and specification of the antenna's geometry to calculate ∆E a (ω). Here we shall assume that General Relativity and has only one solution, namely qR = (µ/λ) 3 − µ/λ. Unlike toroidal eigenvalues, spheroidal ones do depend on µ/λ. is the correct gravitation theory, and proceed to calculate the oscillation energy of the solid as a consequence of its excitation by an incoming GW, which we shall naturally identify with ∆E a (ω). We briefly sketch the details of the process now.
As shown in [8], an elastic solid's response to a GW force can be expressed by a very general formula, which is easily particularised to a spherically symmetric body such as the solid sphere or the hollow sphere. In both cases, as we have just seen, the vibration eigenmodes belong into two families (spheroidal and toroidal), but GWs only couple to quadrupole spheroidal harmonics. If the frequencies of these modes are noted by ω n2 (n = 1 for the lowest value, n = 2 for the next, etc.) and the corresponding wavefunctions by u n2m (x) then the elastic displacements are given by and g (m) (t) are the quadrupole components of the Riemann tensor, while b n is an overlapping integral factor of the GW's tidal coefficient over the solid's extension. Much like in the case of a solid sphere, it has dimensions of length, and is given by a definite integral of the radial terms in the wavefunction u n2m (x); more specifically, where we have introduced the dimensionless function and assumed the following normalization for the wavefunctions: Values are referred to the cross section of a solid sphere in its first quadrupole resonance, whose radius is assumed to be equal to the outer radius of the hollow sphere.
where T is the integration time of the signal in the detector. The energy deposited by the GW in the n-th quadrupole mode is hence calculated by integration of this spectral density over the linewidth of the mode. It is readily found that where G (m) (ω) is the Fourier transform of g (m) (t). The GW flux in the denominator of (3.1) is (clearly) proportional to the sum in the rhs of (3.8), the proportionality factor being in turn proportional to ω 2 -see [8] for a detailed discussion-, so we finally obtain where v 2 t = µ/ρ, M is the detector's mass, and G is Newton's constant. This equation allows relatively easy numerical evaluation of the cross sections, as well defined computer programmes can be written for the purpose.
As we have seen in section 2 above, the eigenvalues and wavefunctions of a hollow sphere only depend on the ratio a/R, and therefore so does the quantity (k n2 b n ) in (3.9). So the cross section σ n only depends on that ratio, too, once a suitable unit of mass is adopted for reference. In figures 8 and 9 we plot σ n for the first two quadrupole modes of the hollow sphere in two different circumstances: in figure 8 we assume a hollow sphere of fixed outer radius -thus its mass decreases with thickness-, and in figure 9 we have instead assumed that the mass of the hollow sphere is fixed , so that its geometrical size increases as it gets thinner. In either case we see that, for the higher mode, the maximum cross section does not happen at a=0, but at some intermediate inner radius: for a ≈ 0.37745R, the cross-section for the second quadrupole mode equals that of the first, and we have the possibility of working with a detector with the same (high) sensitivity at two frequencies.
IV. SENSITIVITY TO GW SIGNALS
We assume that the mechanical oscillations induced in a resonant mass by the interaction with the GW are transformed into electrical signals by a set of identical noiseless transducers (for the sake of simplicity, we consider here non-resonant transducers), perfectly matched to electronic amplifiers with noise temperature T n . Unavoidably, Brownian motion noise associated with dissipation in the antenna and electronic noise from the amplifiers limit the sensitivity of the detector. We refer the reader to [16][17][18] for a complete discussion on the sensitivity of resonant-mass detectors and report here only a few basic formulas for the evaluation of the detector sensitivity to various signals.
The total noise at the output of each resonant mode can be seen as due to an input noise generator having spectral density of strain S h (f ), acting on a noiseless oscillator. S h (f ) represents the input GW spectrum that would produce a signal equal to the noise spectrum actually observed at the output of the detector instrumentation. In a resonantmass detector, this function is a resonant curve and can be characterized by its value at resonance S h (f n ) and by its half height width. S h (f n ) can be written as: 4kT e σ n Q n f n (4.1) Here T e is the thermodynamic temperature of the detector plus a back-action contribution from the amplifiers, and Q n is the quality factor of the mode.
The half height width of S h (f ) gives the bandwidth of the resonant mode: Here, Γ n is the ratio of the wideband noise in the n-th resonance bandwidth to the narrowband noise, where β n is the transducer coupling factor, defined as the fraction of the total mode energy available at the transducer output.
In practice Γ n ≪ 1 and the bandwidth is much larger than the pure resonance linewidth f n /Q n . In the limit Γ n → 0, the bandwidth becomes infinite. The bandwidth of the present resonant bars is of the order of a few Hz [1]. If a quantum limited readout system were available, values of the order of 100 Hz could be reached [19,20].
The equations (4.1 and 4.2) can be used to characterize the sensitivity of the quadrupole modes of a hollow spherical resonant-mass detector. The optimum performance is obtained by filtering the output with a filter matched to the signal. The energy signal-to-noise ratio (SN R) of the filter output is given by the well-known formula where H(f ) is the Fourier transform of h(t).
We now report the SNR of a hollow spherical detector for various GW signals. To be specific, we shall assume that the thermodynamic temperature of the detector can be reduced to below 50 mK, and that the quality factors of the modes are of the order of 10 7 , so that the overall detector noise will be dominated by the electronic amplifier noise. If we express the energy of the latter as a multiple of the quantum limit , i.e., kT n = Nhω then the strain spectral density becomes In these conditions the fractional bandwidth ∆f n /f n becomes of the order of β n that we assume of about 0.1. We shall consider hollow spheres made of the usual aluminium alloy Al 5056 and of a recently investigated copper alloy (CuAl) [21]. Table I displays numerical values of the most relevant parameters for a few example detectors with a noise level equal to the quantum limit, i.e. N = 1.
A. Bursts
We model the burst signal as a featureless waveform, rising quickly to an amplitude h 0 and lasting for a time τ g much shorter than the detector integration time ∆t = ∆f −1 n . Its Fourier transform will be considered constant within the detector bandwidth: For SNR = 1, and using the equation H min 0 = h min 0 τ g , we find The level h min 0 ≃ 10 −22 can be reached by the lowest order mode of a typical large hollow spherical detector such as the one being considered. The GW luminosity of burst sources is still largely unknown, so it is difficult to accurately estimate their detectability. The above sensitivity is however likely to enable the detection of GW collapses in the Virgo cluster for an energy conversion of 10 −4 M ⊙ into a millisecond GW burst. See table II for a few specific examples.
B. Monochromatic signals
We consider a sinusoidal wave of amplitude h 0 and frequency f s constant over the observation time t m . The Fourier transform amplitude at f n is 1 2 h 0 t m with a bandwidth given by t −1 m . The SNR can be written as
C. Chirps
We consider here the interaction of the hollow spherical detector with the waveform emitted by a binary system, consisting of either neutron stars or black holes, in the inspiral phase. The system, in the Newtonian regime, has a clean analytic behaviour, and emits a waveform of increasing amplitude and frequency that can sweep up to the kHz range of frequency.
From the resonant-mass detector viewpoint, the chirp signal can be treated as a transient GW, depositing energy in a time-scale short with respect to the detector damping time [23]. We can then use (4.6) to evaluate the SNR, where the Fourier transform H(f n ) at the resonant frequency f n can be explicitly written as with h(t) indicating h + (t) or h × (t). Substituting into (4.10) the well-known chirp waveforms for an optimally oriented orbit of zero eccentricity in the Newtonian approximation [18], the SNR for chirp detection is [24]: M c is the chirp mass defined as M c = (m 1 m 2 ) 3/5 (m 1 + m 2 ) −1/5 , where m 1 and m 2 are the masses of the two compact objects and r is the distance to the source. The chirp mass is the only parameter that determines the frequency sweep rate of the chirp signal in the Newtonian approximation, and can be determined by a double passage technique [24]: much like in a solid sphere detector, one can measure the time delay τ 2 − τ 1 between excitations of the first and second quadrupole modes on a hollow spherical detector to calculate the chirp mass through equation where ω 1 and ω 2 are the angular frequencies of the first and second quadrupole modes, respectively. Time delays are of the order of a fraction of a second for the hollow spheres considered in this paper, well within the timing possibilities of resonant mass detectors [25] Another consequence of the multimode and multifrequency nature of a spherically symmetric detector is the possibility to determine the orbit orientation by the measurement of the relative proportion of the two polarisation amplitudes, and thereby the distance to the source and the intrinsic GW amplitudes [24]. See figs. 10 and 11 for a specific example referring to optimally oriented circular orbits. Because of the Newtonian approximation, eqs. (4.11) and (4.12) become inaccurate near coalescence. In analogy with previous analyses [23,24], we limit our considerations to the frequency at which there are still five cycles remaining in the waveform until coalescence. The highest chirp mass values reported in the figures are determined by the requirement that the five-cycle-frequency of the source be larger than the resonant frequencies of the detector.
D. Stochastic background
In this case h(t) is a random function and we assume that its power spectrum, indicated by S gw (f ), is flat and its energy density per unit logarithmic frequency is a fraction Ω gw (f ) of the closure density ρ c of the Universe: = Ω gw ρ c (4.13) S gw (f ) is given by 14) The measured noise spectrum S h (f ) of a single resonant-mass detector automatically gives an upper limit to S gw (f ) (and hence to Ω gw (f )).
Two different detectors with overlapping bandwidth ∆f will respond to the background in a correlated way. The SNR of a GW background in a cross correlation experiment between two detectors located near one another and having a power spectral density of noise S 1 h (f ) and S 2 h (f ) is [26]: where t m is the total measuring time.
Detectors located some distance apart do not correlate quite so well because GWs coming from within a certain cone about the line joining the detectors will reach one of them before the other. The fall-off in the correlation with separation is a function of the ratio of the wavelength to the separation, and has been studied for pairs of bars, pairs of interferometers [27,28] and pair of spherical detectors [29]. Assuming two identical large hollow spherical detectors are co-located for optimum correlation, the background will reach a SNR = 1 if Ω gw is Ω gw ≃ 10 −9 × f n 200 Hz where the Hubble constant has been assumed 100 km s −1 .
Hollow spherical detectors can set very interesting limits on the GW background. In particular, following recent estimations based on cosmological string models [30], it emerges that experimental measurements performed at the level of sensitivity attainable with these detectors would be true tests of Planck-scale physics.
Eqs. (4.15) and (4.16) hold for whichever cross-correlation experiment between two GW detectors adjacent and aligned for optimum correlation. An interesting consequence is that the sensitivity of a hollow sphere-interferometer observatory will be unprecedented. It can be worth to build a hollow spherical mass detector close to a large interferometer, like LIGO or VIRGO, to perform stochastic searches [31].
V. CONCLUSIONS
In this paper we have been mainly concerned with the problem of how an elastic hollow sphere responds to a GW signal impinging on it. To address this problem we have developed an analytical procedure to fully sort out the eigenfrequencies and eigenmodes of that kind of solid, then applied it to calculate the GW absorption cross section for arbitrary thicknesses and materials of our solid.
When realistic hypotheses are made regarding the size and material of a possible GW detector of this shape, we have seen that a hollow sphere can be advantageous in several respects. It has all the features associated with its symmetry, such as omnidirectionality and capability to determine the source direction and wave polarisation. Also, its quadrupole frequencies are below those of an equally massive solid sphere, thus making the low frequency range accessible to this antenna with good sensitivity. We have investigated the system response to the classical GW signal sources (bursts, chirps, continuous and stochastic) for several sizes and materials, and seen that interesting signal-tonoise ratios are attainable with such a detector. Also, its bandwidth partly overlaps with that of the projected large interferometers [32,33], so potentially both kinds of detectors can be operated simultaneously to make hybrid GW observatories of unprecedented sensitivity and signal characterisation power.
While it seems possible to cool a 100 ton solid sphere down to 50 mK [34], the possibility of cooling a large hollow sphere at such low temperatures, as well as the fabrication technique and the influence of cosmic rays on a low-temperature GW detector of that shape and dimensions, are currently under investigation. | 6,367.2 | 1997-07-30T00:00:00.000 | [
"Physics"
] |
Compression of ECG signals using variable-length classifıed vector sets and wavelet transforms
In this article, an improved and more efficient algorithm for the compression of the electrocardiogram (ECG) signals is presented, which combines the processes of modeling ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS), and residual error coding via wavelet transform. In particular, we form the VL-CSEVS derived from the ECG signals, which exploits the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many of thousands of ECG segments of two different lengths obtained by the energy based segmentation method, then they are presented to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and MIT-BIH Compression Test Database and its performance is evaluated by using some evaluation metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that our proposed algorithm achieves high compression ratios with low level reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which has been supported by the clinical tests that we have carried out.
Introduction
An electrocardiogram (ECG) signal, which is a graphical display of the electrical activity of the heart, is one of the essential biological signals for the monitoring and diagnosis of heart diseases.ECG signals recorded by the digital equipments are most widely used in the applications such as monitoring, cardiac diagnosis, real-time transmission over telephone networks, patient databases and long-term recording.Some key parameters such as the sampling rate, sampling precision, number of leads and recording time play an important role in the increase of the amount of data collected from an ECG signal.Evidently, when continuously generating the huge amount of ECG data, in order to be able to process these data, we need the proper equipments that have the high storage capacity.On the other hand, when the equipments are used in the remote monitoring activities, they must have the wide transmission band.Therefore, in order to achieve removing the redundant information from the ECG signal with retaining all clinically significant features including P-wave, QRS complex and T -wave [1,2], we need to employ an effective ECG compression algorithm.
In the recent years, the studies dealing with the modeling and compression of the ECG signals essentially utilize one of the following methods: (i) The direct time-domain methods, (ii) the transform-based methods, (iii) the parameter extraction methods [2,3].
Among the proposed methods in the literature, one of the most known and powerful algorithm is the set partitioning in hierarchical trees (SPIHT) compression algorithm [21].Another efficient ECG compression method uses the cosine modulated filter banks to reconstruct the original ECG signals [25].In [22], another ECG compression method is proposed, which is based on the adaptive wavelet coefficients quantization by using a modified two-role encoder.Most recently, the waveletbased ECG data compression system having a linear quality control scheme was proposed [20].
In some previously published articles [26,27], it has been shown that the predefined signature and envelope vector sets best describe the speech and ECG signals.It has also been demonstrated in [26,27] that, by introducing and employing a new systematic procedure called SYMPES, the predefined signature and envelope vector sets have been used to model the speech and ECG signals frame by frame.In this procedure, each frame of the reconstructed speech or ECG signal is represented by a combination of multiplication of three major quantities, which are the gain factor, the signature vector, and the envelope vector.
In [28], a novel EEG compression method was proposed, which is based on the construction of the classified signature and envelope vector sets (CSEVS).The signature and envelope vector sets obtained for the speech and ECG signals in [26,27] were then extended to the EEG signals in [28] to obtain the signature and envelope vector sets for the EEG signals.Then, these vector sets were classified by using k-means clustering algorithm to determine the centroid vectors of each classified vector sets, which were to be used in constructing of the CSEVS.The main advantage of the method proposed in [28] is that it reduces the size of vector sets and computational complexity of the searching and matching processes.The method introduced in [28] also proved to have advantages over the wavelet transform coding technique as far as the average RMSE, average PRD, average PRD1, and CR(%) are concerned.
In [29], a new block-based image compression scheme was presented based on generation of classified energy and pattern blocks (CEPBs).In the method, first the clasified enesrgy blocks (CEB) and clasified pattern blocks (CPB) sets were constructed and any image data can be reconstructed block by block using a block scaling coefficient and the index numbers of the CEPBs placed in the CEB and CPB.The CEB and CPB sets were constructed for different sizes of image blocks such as 8 × 8 or 16 × 16 with respect to different compression ratios (CRs) desired.At the end of a series of the experimental works, the evaluation results show that the proposed method provides high CRs such as 21.33:1, 85.33:1 while preserving the image quality at 27-30.5 dB level on the average.When the CR versus image quality (PSNR) results in the proposed method compared to the other works, it seems that the method is superior to the DCT and DWT particularly at low bit rates or high CRs.
In the current article, we propose a new and more efficient ECG compression algorithm which relies on the variable-length CSEVS (VL-CSEVS) and wavelet transform.In this proposed algorithm, we first use the energy based segmentation method to represent an ECG frame with high energy by short segments and an ECG frame with low energy by long segments.Then, the unique patterns VL-CSEVS are generated from these ECG segments of two different lengths.Thus, when compared with the previous results obtained in [26][27][28], our new method significantly improves the CR, and then the use of wavelet transform based residual error coding both enhances the quality of the reconstructed signal.In order to check the performance of our new method for a different classes of ECG signals, given that the original unique patterns VL-CSEVS remain unchanged, we have used the MIT-BIH compression test database called the worst-case database by the its developers [15].
The parameters PRD, MPRD, and maximum error (MAXERR) for compression the ECG of the unique pattern VL-CSEVS derived from the original ECG are measured by changing both the training set and the test set at each round of the 4-fold cross-validation method, whose average values are used to determine the performance of our new proposed method.We should point out here that the sampling frequency, resolution, mean value, and amplitude value of the ECG signals in the test database are different from those of the ECG signals used to construct the unique patterns VL-CSEVS.
The article is organized as follows.Section 2 describes the details of the newly proposed compression algorithm.In Section 3, we present the experimental results obtained by using the proposed compression algorithm, which are then compared with some known successful ECG compression methods reported in [21,22,25].In Section 4, we give the conclusion.
Proposed compression algorithm
In this article, an efficient ECG compression algorithm which is based on the modeling ECG signals via VL-CSEVS and employs the residual error coding by using the wavelet transform is proposed.One of the main advantages of our method is to ensure the quality in the reconstruction of an ECG signal.
We use the variable-length approach to generate the CSEVS.In this context, an ECG frame with high energy carrying useful information such as QRS complex is represented by the short segments.At the same time, an ECG frame with low energy with or without possessing clinical information is represented by the long segments.The length of the short segments is determined to be 16 and that of long segments is determined to be 64.
In determination of the length of the segments, we first check the relationship between the segment length and blocking effect for various segment lengths, and then choose the segment length which minimizes the blocking effect on the reconstructed ECG signal.
After the variable-length segmentation process, the signature and envelope vectors are extracted from many of thousands of ECG segments.Then, the signature and envelope vectors are classified by employing effective kmeans algorithm which helps us to eliminate the similar signature and envelope vectors.Thus, the VL-CSEVS are constructed by using non-similar signature and envelope patterns, implying that the VL-CSEVS will have unique patterns.
In conclusion, the ECG segments with low energy can be more compressed than the ECG segments that have high energy.Thus, our new method allows us to significantly increase the total CR of ECG signals.On the other hand, some ECG frames containing p-wave or t-wave carries valuable clinical information may have low energy.In the reconstruction of these types of ECG frames, the reconstruction error is substantially decreased by employing the wavelet based residual error coding technique.The proposed algorithm is superior to the powerful wavelet based ECG compression methods, especially at low bit rates.
The newly proposed algorithm basically consists of three processing stages: the pre-processing stage, the stage of construction of the VL-CSEVS, and reconstruction process of an ECG signal.In the following subsections, each stage is explained in details.
Preprocessing stage
The preprocessing is one of the most important stages of an ECG compression method because it plays a crucial role in enhancing the compression performance of the algorithm.The preprocessing stage is carried out in three steps.
The first step of this stage normalizes the frequency of each signal to 500 Hz using cubic spline interpolation technique.The amplitude normalization is the second step of this stage, which normalize amplitude of each ECG signal is between 0 and 1 using the following formula The final step of this stage is the segmentation process.There are two traditional ECG segmentation methods in the literature.The first method is based on the QRS detection algorithm.In this method, each QRS peak of heartbeat or each R-R interval is identified as a segment.Due to the heart rate variability, this segmentation method increases the computational cost of the compression process.The other method is the fixed-length segmentation which is one of the mostly used method in the past literature.In our previous work [27], we employed the fixedlength segmentation method to split ECG signals into short and quasi-periodic segments.In this research work, energy based segmentation method that splits ECG signal into two different lengths according to the energy variation of the signal is utilized to improve the compression performance of the proposed algorithm.This segmentation method divides the ECG frames with high energy into the short segments whose length is 16 samples while the ECG frames with low energy are divided into the long segments whose each contains 64 samples.
When the preprocessing stage is completed, the normalized ECG segments of two different lengths are obtained to construct the VL-CSVES which are explained in detailed in the next subsection.
Construction of the VL-CSEVS
A normalized ECG segment X i obtained in the preprocessing stage can be spanned to a vector space in the following form.
where the V i represents orthonormal vectors in the matrix notation and C i are uncorrelated coefficients such that in which L F is the number of the samples in any ECG segment which is equal to either 16 or 64.Now, any normalized ECG segment X i can be represented as a weighted sum of the orthonormal vectors v ik as follows: (6) This equation may be truncated by taking the first p term.In this case, the approximation X ip and approximation error ε i are given in the following form.
The orthonormal vectors v ik are determined by minimizing the expected vector of the error vector i with respect to v ik in the LMS sense.Eventually, these vectors which are represented by v ik are the eigenvectors of the autocorrelation matrix R i of the X i segment.The autocorrelation matrix R i can be calculated as follows The entries of the matrix R i are computed by x j+1 x j+1+d (10) The above mentioned LMS process results in the eigenvalue problem.Hence, the eigenvectors v ik of the autocorrelation matrix R i and the corresponding eigenvalues l ik are found by solving Since the autocorrelation matrix R i is a positive semidefinite, real-symmetrical and toeplitz matrix, the eigenvalues l ik are real and non-negative and the eigenvectors v ik are all orthonormal.
The eigenvectors v ik can be arranged according to the descending order of the magnitude of their corresponding eigenvalues l ik .
In this case, the eigenvectors v i1 that have the highest energy associated with the highest magnitude of the eigenvalue represents the direction of the greatest variation of the signal and they are also called signature vectors.The signature vector may approximate each segment that belongs to the original ECG.Therefore, each segment X i is represented as follows Once the approximation ( 13) is obtained, it can be converted into the equality by means of an envelope diagonal matrix A i for each segment.Thus, X i is calculated by (14) In ( 14), the diagonal components a ir of the matrix A i are computed in terms of the components v i1r of the signature vector v i1 and the component x ir of the segment vector X i by following simple division.
x ir c i1 v i1r (15) In this research work, many ECG signals were examined and thousands of segments which contain either 16 or 64 samples were analyzed.After the generation of all of the signature and the envelope vectors employing the procedure given above, these vectors were plotted.It has been observed that there were a lot of signature vectors similar to each other.This type of repetitive similarity properties have also observed among the envelope vectors.The vectors in the signature and envelope side were clustered by using an effective k-means clustering algorithm [1] and the centroid vectors of each cluster were determined for these two vector types.These centroid vectors are called as classified signature vectors and classified envelope vectors.The block diagram that explains this procedure is given in Figure 1.
After determination of the centroid vectors for each cluster of the signature and envelope vectors, two types of sets were constructed by using these centroid vectors.The centroid vectors obtained from the signature vectors and the envelope vectors are renamed as classified signature vectors (CSV) and classified envelope vectors (CEV), respectively.The CSVs are collected under either the classified signature set-16 (CSS 16 ) or the Classified Signature Set-64 (CSS 64 ) according to their segment length.The CSVs are represented by Ψ NS (n); NS = 1, 2, ..., R, ..., N S .The integer n represents total number of samples in the each CSV while the integer N S designates the total number of the CSVs in the CCS 16 and CCS 64 , individually.In the same way, the CEVs are collected under either the CES 16 or the CES 64 according to their segment length.The CEVs are represented by F NE (n); NE = 1, 2, ..., K, ..., N E .The integer n represents total number of samples in the each CEV while the integer N E denotes the number of the CEVs in the CES 16 and CES 64 , individually.Afterwards, CSS 16 , CES 16 , CSS 64 , and CES 64 are collected in the VL-CSEVS.Details of the reconstruction process of measured ECG signals by means of VL-CSEVS are given step by step in the following subsection.
Reconstruction process of ECG signals by using VL-CSEVS
The reconstruction process of the proposed method consists of two operations: encoding and decoding.The block diagrams of the encoders and decoders are given in Figures 2 and 3, respectively, which are explained step by step in next subsections.
Encoder
Step 1: The original ECG signal is first normalized, and then it is segmented in the pre-processing stage.If the segment length is 16 the switch-codebook bit b SWCB is assigned as 1.Otherwise, b SWCB is equal to 0.
Step 2a: An appropriate CSV from either CSS 16 or CSS 64 according to the value of b SWCB is pulled out such as the error which is given below is minimized for all R = 1, 2, . . ., R, . . ., N s .
Step 2b: The index number R that refers to CSV is stored.
Step 3a: An appropriate CEV from either CES 16 or CES 64 according to the value of b SWCB is pulled out such as the error shown below is minimized for all K = 1, 2, . . ., K, . . ., N E .
Step 3b: The index number K that refers to CEV is stored.
Step 4: A new gain coefficient factor C i is replaced by C R by computing as follows, so that the global error given in ( 19) is minimized.
Step 5: At this step, the segment X Ai is approximated by Step 6: The above steps is repeated to determine the model parameters R, K, and C i for each segment of ECG signal and X rec is reconstructed.
Step 7: Residual error is figured out by subtracting Step 8: The residual error is down-sampled by two using cubic spline interpolation technique and three-level discrete wavelet transform using Biorthogonal wavelet (Bior 4.4) is applied to the down-sampled residual signal.
Step 9: The modified two-role encoder [22] is employed for coding the obtained wavelet coefficient, and thus, the encoded residual bit stream is obtained.
Step 10: Encoded bit stream of the index number of R is obtained by using Huffman coding.
Step 11: Encoded bit stream of the index number of K is obtained by using Huffman coding.
Step12: The new gain coefficients C i are coded by using 6 bits.
Decoder
Step 1: The encoded bit stream of the index number of R and K are decoded by using Huffman decoder.Step 2: For each segment, the index number of R and K are used to pull out the appropriate CSV and CEV from the VL-CSEVS according to the switch-codebook bit b SWCB .
Step 3: The each segment X Ai is approximated by the following mathematical formula Step 4: The reconstructed ECG signal X rec is produced by Step 5: The encoded bit stream of the residual signal is decoded by using the modified two-role decoder [22].
Step 6: The reconstructed residual signal err rec is produced by applying the inverse WT and up-sampling process by a factor of two, respectively.
Step 7: In the final step, the reconstruction process of the ECG signal is accomplished by adding the reconstructed residual signal to the reconstructed ECG signal as follows.
In the following section, the simulation results for the proposed compression algorithm are presented.
Evaluation metrics to measure the performance of the proposed compression algorithm
The performance of the proposed ECG compression algorithm and those given in [21,22,25] are evaluated by using two criteria which are the CR and distortion error.The CR is defined as the ratio between the number of the bits required to represent the original and reconstructed signals [30].This ratio is given by where b org and b rec represent the number of the bits required for the original and recon-structed signals, respectively.
However, the exact compression performance of the proposed method can only be analyzed when the CR is combined with the distortion error [30].The distortion error is usually considered to be the percentage rootmean-square differences (PRD) defined by where x org (n) refers to the original signal, x rec (n) denotes the reconstructed signal and N represents the length of the frame.
Since the distortion error basically depends on the mean value of the original signal, it can be masked the real performance of a compression algorithm.Therefore, the MPRD, which is totally independent of the mean value of the original signal, is suggested to be used to test the real performance of a compression algorithm.The MPRD is defined by where x denotes the mean value of the original signal [30].
It is well known in the literature that the PRD error measures the global quality of the reconstructed signal.In order to assess the real performance of the compression algorithm, not only the global error but also the local distortion must be examined.The local distortion indicates the distribution of the error along with the reconstructed signal and can be determined by using the MAXERR definition which is defined by All of the evaluation criteria explained above are employed in our experiments.We will compare the results of our algorithm with the results of the algorithms given [21,22,25] as far as the above mentioned evaluation criteria are concerned.
Mean opinion score test
In order to evaluate the performance of the proposed algorithm from clinical point of view, we use the test method of Mean Opinion Score (MOS) whose test parameters are given in Table 1, which is similar to test presented in [31].In Table 1, in section A, the cardiologist is asked to give a score value ranging from 1 to 5 in order to determine the similarity between the original and reconstructed signals.In section B, the cardiologist is asked to determine whether one can make a different diagnosis using the reconstructed version of the original signal without seeing the original signal.The process of section A is repeated for the important segment QRS and two critical waves P and T of the original and reconstructed version of the signals in section C.
The results of the MOS test are analyzed by using the two different distortion measures: MOS ERROR and Segmentation based MOS (SMOS).The MOS ERROR which is defined for a single reconstructed signal in [31] is expressed as follows: where a, an integer ranging from 1 to 5, is the measure of the similarity between the original and reconstructed signals.b is the answer to section B related to the diagnosis.If the answer is YES, b is equal to 0, otherwise, b is equal to 1 [31].
The SMOS defined as the second distortion measure shows the similarity between the important segment and waves of the original and reconstructed ECG signals specifically QRS segment, P and T waves.In this test, SMOS is determined for QRS segment, P and T waves, separately.The results obtained for each segment of the signal are represented by SMOS QRS , SMOS P , and SMOS T , respectively.We should point out here that the lower values of the MOS ERROR represent the better signal quality while the higher values of SMOS indicate the better signal quality.
Experimental results and comparisons
The compression algorithm explained in the previous section was first run in Matlab 7.0.1 platform, and then it was tested with ECG recordings on an Intel Core2 Quad 2.66 GHz processor.In order to evaluate the performance of the proposed compression algorithm, MIT-BIH Arrhythmia Database [32] and MIT-BIH Compression Test Database [15] were used in this research work.The MIT-BIH Arrhythmia Database consists of 48 ECG recordings which are sampled at 360 Hz and quantized at 11-bit resolution [32].On the other hand, the MIT-BIH Compression Test Database consists of 168 ECG recordings.Each data in this database is sampled at 250 Hz and quantized at 12-bit resolution [15].Each record in both database was first resampled at 500 Hz by using a cubic spline interpolation technique, and then the amplitudes of these records were normalized between 0 and 1.
The selection of the appropriate database is very important in order to construct the VL-CSEVS.The MIT-BIH arrhythmia database was selected as the training set because it contains a large set of ECG beats and many different examples of cardiac pathologies.Then, VL-CSEVS having the unique patterns were generated by analyzing a huge number of the ECG segments obtained from this database.
In the construction of the VL-CSEVS, 4-fold crossvalidation method was employed in order to remove the biasing effect.After the preprocessing stage, four different segments with a length of 6.4 s were extracted from each ECG recording in the MIT-BIH Arrhythmia Database.The group of the first segments were collected in the Subset-1.Similarly, Subsets-2, 3, and 4 were formed by the group of the second segments, the group of the third segments, and the group of the fourth segments, respectively.Thus, the four subsets S1, S2, S3, and S4 of the equal sizes were constituted.In other words, one subset was used as the test set and the remaining subsets were employed as the training sets for each round.Thus, each subset was used exactly once as the test set.In the first round, while Subset-1 was used as the test set, Subsets-2, 3, and 4 were employed as the training sets; in the second round, Subset-2 was the test set while Subsets-1, 3, and 4 were the training sets; and so on.After all these training, VL-CSEVS given in Table 2 were constructed for each round.In this table, b SWCB refers to the switch-codebook bit that controls the length of an incoming segment.b Ci , b R , b K are the minimum numbers of the bits required to represent the gain coefficient C i , and the integers N S and N E , respectively.
The performance of the proposed compression algorithm with respect to PRD, MAXERR, and CR was evaluated for each round and shown in Figure 4.The variation of PRD and MAXERR with CR at each round for the proposed compression algorithm was illustrated in Figures 4a, b, respectively.Besides, the mean performance of the results given in Figure 4 was presented in Table 3.
The proposed compression algorithm achieves the average CRs from 4:1 to 20:1 with average PRD a varies between 1.2 and 5.6%.Since the acceptable values of PRD were reported to be less than 9% in the literature [31], it can be emphasized that the results obtained in the proposed compression algorithm provide high CR with very low PRD levels.Furthermore, the average encoding and decoding times of the proposed compression algorithm are 0.687 and 0.318 s, respectively.
In this experimental research work, the proposed algorithm was compared with three well-known successful ECG compression methods SPIHT [21], Blanco-Valesco et al. [25], and Benzid et al. [22] in terms of average PRD, average MPRD, and average CR.In order to carry out a precise comparison among the proposed algorithm and other ECG compression methods given in [21,22,25], the same test dataset has been used for all these methods.This dataset contains 11 recorded ECG signals received from the MIT-BIH arrhythmia database (records: 100, 101, 102, 103, 107, 109, 111, 115, 117, 118, and 119).The comparison between our proposed method and the SPIHT [21] in terms of the average PRD and CR is illustrated in Figure 5.The comparison between our proposed algorithm and the Blanco-Valesco et al. [25] is given in Figures 6 and 7. Figure 6 depicts the variation of the average PRD with respect to the average CR and Figure 7 shows the variation of the average MPRD with respect to the average CR.Finally, a comparison between our results and those obtained by Benzid et al. [22] is given in Figure 8 In order to evaluate the worst case performance of the unique VL-CSEVS formed by using the MIT-BIH Arrhythmia Database, the proposed algorithm was also tested on the ECG signals received from the MIT-BIH Compression Test Database which is called the worst case test database by its developers [15].It should be noted that the sampling frequency, resolution, mean value, and amplitude value of the ECG signals in this database are completely different from those of the ECG signals in the MIT-BIH Arrhythmia Database which is used to construct the unique VL-CSEVS.The mean values of the results obtained at each round in the worst case analysis are presented in Table 4.The comparative results between the proposed algorithm, our previous method [27] and Hilton [15] are depicted in Figure 9.
As it can be seen from Table 4, the proposed algorithm achieves the average CRs from 4:1 to 20:1 with an average MPRD in the range of 1.627-8.631%.Moreover, Table 2 The number of CSV, CEV, and the required total bit in the VL-CSEVS the MAXERR, representing the local distortion, varies between 1.015 and 4.209%.Furthermore, the average encoding and decoding times of the proposed algorithm are 0.619 and 0.279 s, respectively.Figure 9 shows that the compression performance of our previous method mentioned in [27] is significantly improved by employing the VL-CSEVS in this research work.Also, it is clearly seen from Figure 9 that the compression performance of the proposed algorithm is significantly better than the results given in Hilton [15] in the light of the MPRD.
It is important to note that in Hilton [15], the PRD was used as the distortion measure.Although the PRD results are always smaller than the MPRD results due to the mean value of the signal, MPRD results obtained in the proposed algorithm are smaller than the PRD results obtained in Hilton [15].
In addition to the results of the objective evaluation methods given in Tables 3 and 4, several original ECG signals randomly chosen from test database and their reconstructed versions are displayed in Figures 10,11,12, 13 and 14 to reveal the visual quality of the ECG signals which are reconstructed by using the proposed compression algorithm.In Figures 10 and 11, the ECG records 118 and 117 which are randomly selected from the MIT-BIH Arrhythmia Database and their reconstructed versions along with the information of the CR, PRD, and MAXERRR are presented, respectively.Similarly, two different original ECG signals which are randomly selected from the MIT-BIH Compression Test Database and their reconstructed versions along with the information of the CR, MPRD, and MAXERRR are presented in Figures 12 and 13, respectively.As it can be clearly seen from these figures, both the morphological features of ECG signals are well preserved.
Clinical evaluation and discussion
In the clinical evaluation of our results, we have used 11 original ECG signals from the MIT-BIH Arrhythmia Dataset and 11 original ECG signals from the MIT-BIH Compression Test Database.These 22 original ECG signals were reconstructed at 4:1, 6:1, 8:1, 10:1, 12:1, 14:1, 16:1, 18:1, and 20:1 CRs by using our proposed method.As a result, these 22 original and 198 reconstructed ECG signals were evaluated by the cardiologists in order to validate the performance of the proposed algorithm from clinical point of view.
In the first step of the clinical evaluation, the cardiologist b expressed his opinions by examining these original and reconstructed ECG signals without applying any test.He explained that, the onset, off set and duration of the segments (or intervals) of the ECG signals such as PR, QRS, ST are correctly determined in the reconstructed or compressed ECG signals obtained by the proposed algorithm also at 20:1 CR.He pointed out that the proposed algorithm provides the nearly perfect reconstruction of the QRS segments at 20:1 CR.Although the p-wave and t-wave of the reconstructed ECG signals have more reconstruction error than the QRS segments of the reconstructed ECG signals, these distortions are not critically important in terms of diagnosis.He also explained that the quality of the reconstructed ECG signals is also acceptable at low bit rates.
On the other hand, he also emphasized that, it is very difficult to obtain high CRs with low reconstruction errors in the compression of the Holter ECG's or Stress ECG's which are recorded during movement or exercise, since these types of ECG records contain more variation or artifacts compared with ECG signals recorded in the resting mode.Therefore, the CR has to be selected by the cardiologists to ensure the clinical information depending on the ECG signal being compressed.In this context, it is an important advantage that the CR of the proposed algorithm can be adjusted easily according to the desired CR starting at 1 to 20 or higher.
Furthermore, an average opinion score is requested from the cardiologist in order to determine the clinical quality of the reconstructed ECG signals and he rated the clinical quality of the proposed compression algorithm at 20:1 CR as 4 over 5.As a result, the clinical operational range of the proposed compression algorithm is up to 20:1 CR.
In the second step of the clinical evaluation of the results obtained by our proposed method, the MOS test given in Table 1 has been applied to the original and reconstructed ECG signals by the cardiologist.c Then, the results of the MOS test were analyzed by means of MOS, SMOS QRS , SMOS T , SMOS P and MOS ERROR which are shown in Table 5.The variations of the MOS, SMOS QRS , SMOS T , and SMOS P with respect to the CR are also given in Figure 14.
When analyzing the values of MOS given in Table 5, it is clearly seen that the quality of all reconstructed ECG signals is acceptable also at the CR of 20:1.Furthermore, the results of SMOS QRS show that the proposed compression algorithm provides nearly perfect reconstruction of the QRS segments of the reconstructed ECG signals also at the CR of 20:1.In the light of the results of the MOS and SMOS QRS , the cardiologist pointed out that the proposed compression algorithm provides the useful CRs ranging from 4:1 to 20:1.On the other hand, the results of the SMOS T and SMOS P are lower in comparison with the results of SMOS QRS as shown in Figure 14.This is an expected result since the proposed compression algorithm further compresses the ECG segments with low energy in comparison with the ECG segments with high energy.
In order to analyze the values of both MOS and SMOS given in Table 5 in terms of diagnostic accuracy, we have employed the MOS ERROR .It was reported in [31] that the reconstructed signal quality can be classified into four different quality groups by using the CR without losing any diagnostic information.The other three are compressed at 18:1, 14:1, and 12:1, respectively, without losing any diagnostic information.
In conclusion, the ranges of the utility of the proposed compression algorithm are from 4:1 to 20:1 CRs depending on the ECG signal to be compressed.
Conclusion
We have introduced an efficient compression algorithm for ECG signals.other ECG compression methods given in [21,22,25].In this work, the VL-CSEVS which have unique patterns are specifically designed for ECG signals by using the relationship between energy variation and clinical information.
In this research work, ECG signals are segmented by using energy based segmentation so that ECG frames which have the high energy are represented by the short segments while the other frames with low energy are represented by the long segments.Therefore, both the size of VL-CSEVS and the computational complexity of the searching and matching process are reduced significantly in comparison with the predefined signature and envelope vector sets proposed in our previous works [26,27].
In conclusion, the CR of the proposed algorithm is significantly improved in comparison with the results of our previous method [27].Besides the good performance in the average CR, the low reconstruction error is ensured by applying the residual error coding.
The performance of the proposed algorithm is evaluated and compared with the three well-known ECG compression methods given in [21,22,25].The results of the performance evaluations show that the proposed algorithm provides the better results than the other methods in terms of the average CR, the average PRD, the average MPRD, and the MAXERR which are wellknown objective evaluation criteria.Moreover, the computational complexity of the proposed algorithm is also very low so that the average encoding and decoding times are almost 0.7 and 0.3 s, respectively.
In the experiments, the 4-fold cross-validation is employed to expose the relationship between the CR and PRD at different levels.The results obtained at each round show that there is almost no change in the PRD levels which correspond the same CR values.Furthermore, the performance of the VL-CSEVS is also tested on the ECG signals from a different database which is called as MIT-BIH compression test database.During the experiments, we observed some small differences in the PRD levels at the same CR values in the worst-case condition employing the MIT-BIH compression test database.These experimental results show that the proposed algorithm does not need any adaption process to reconstruct any ECG signals which have different characteristics.That is to say, the proposed VL-CSEVS do not require to re-created specifically for an ECG database so that the VL-CSEVS are constructed from the unique patterns extracted by examining many of thousands ECG segments and they are fixed.
We finally point out that the generation of the VL-CSEVS is carried out off-line and the unique VL-CSEVS are fixed and located at the receiver side of the system.In other words, the unique VL-CSEVS do not required to be redesigned in order to compress and reconstruct any ECG signal.On the other hand, the encoding and decoding parts of the proposed method are on-line procedures.When the average encoding and decoding times are analyzed it can be said that the proposed method is appropriate for real-time applications.
Endnotes
a Each signal in the MIT-BIH Arrhythmia Database included a baseline of 1024 added for storage purposes.Consequently, the PRD which is given in (27) is worked out by subtracting 1024 from each data sample.b The
Figure 1
Figure 1 The block diagram of the construction of the VL-CSEVS.
Figure 2
Figure 2 The block diagram of the encoder part of the proposed algorithm.
Figure 3
Figure 3 The block diagram of the decoder part of the proposed algorithm.
which compare the average PRD and average CR obtained by both methods.When analyzing the results illustrated in Figures 5, 6, 7 and 8, it can be clearly seen from these figures that the proposed compression algorithm outperforms the compared methods especially at low bit rates.
Figure 4
Figure 4 The performance of the proposed algorithm by means of CR, PRD, and MAXERR: (a) The variation of the average PRD with respect to the CR; (b) The variation of the average MAXERR with respect to the CR.
Figure 5
Figure 5 Comparison of the proposed algorithm with SPIHT in terms of average PRD and CR.
Figure 6 Figure 7
Figure 6 Comparison of the proposed algorithm with Blanco-Valesco in terms of average PRD and CR.
Figure 8
Figure 8 Comparison of the proposed algorithm with Benzid in terms of average PRD and CR.
Figure 9 Figure 10
Figure9Comparison of the proposed algorithm with our previous method and Hilton in terms of average MPRD and CR.
Figure 14
Figure 14 The variation of the average MOS, SMOS QRS , SMOS T , and SMOS P with respect to the CR.
Figure 15
Figure 15 The clinical evaluation of the proposed compression algorithm by means of MOS ERROR , CR, and PRD: (a) The variation of the average MOS ERROR with respect to the CR; (b) The variation of the average MOS ERROR with respect to the PRD.
Table 1
The MOS test ECG Signal Name:#### A. The measure of similarity between the original ECG signal and reconstructed ECG signal.
Table 3
The performance of the proposed algorithm tested on the MIT-BIH Arrhythmia Database with respect to average CR, PRD, MAXERR, encoding end decoding time
Table 4
The performance of the proposed algorithm tested on the MIT-BIH Compression Test Database with respect to average CR, MPRD, MAXERR, encoding and decoding times MOS ERROR .The reconstructed signal quality is classified to be very good for the values of MOS ERROR between 0 and 15%.If the value of MOS ERROR is between 15 and 35%, the reconstructed signal quality is determined to be good.The reconstructed signal quality is assigned not good if the value of MOS ERROR is between 35 and 50%.When the value of MOS ERROR is greater than 50%, the reconstructed signal quality is assumed to be bad.The variation of average MOS ERROR given inTable 5 with respect to the CR and PRD was illustrated in Figures 15a, b, respectively.When analyzing the results of the MOS ERROR , we have observed that 71.85% of the all reconstructed ECG signals is in the very good quality group while 21.05% of the all reconstructed ECG signals is in the good quality group.On the other hand, the rest of the reconstructed ECG signals has the values of MOS ERROR which are greater than 35%.As seen from Table6, the clinical test proved that the proposed compression algorithm achieves to compress 16 of 22 original ECG signals, used in the clinical evaluation, at 20:1 CR by preserving the diagnostic information.The three of these signals are compressed at 16:1
Table 5
The average results of the clinical test of the proposed compression algorithm with respect to the CR, MOS, SMOS QRS , SMOS T , SMOS P , and MOS ERROR
Table 6
The diagnostic performance of the proposed compression algorithm for the original ECG signals used in the clinical test clinical evaluation was carried out by Prof. Osman Akdemir who is a cardiologist in the Department of Cardiology at the T.C. Maltepe University, Istanbul Turkey.c The clinical test was carried out by Dr. Ruken Bengi Bakal who is a cardiologist in the Department of Cardiology at the Kartal Kosuyolu Yuksek Ihtisas Education and Research Hospital, Istanbul, Turkey. | 9,580.2 | 2012-05-31T00:00:00.000 | [
"Computer Science"
] |
Concepts of Cloud Computing and Protection of Data in Cloud Computing
The internet has changed the world in a strong way.it has traveled from the concept of parallel computing to distributed computing to grid computing and recently to cloud computing. Cloud computing is a recent trend in Information Technology that moves computing and data away from desktop and portable personal computers into large data center. The main advantage of cloud computing is the user cannot pay for infrastructure, its installation, required man power to handle such infrastructure and maintenance. Cloud computing technology is collecting success stories of savings, ease of use, ease of access and increased flexibility in controlling how resources are used at any given time to deliver computing capability. Cloud providers who can demonstrate that they protect personal information may be more truthful and therefore more attractive to potential Cloud users. The cloud service can be implemented in three different service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). Data security and privacy protection issues are relevant to both hardware and software in the cloud architecture. This study is to review the concepts of cloud computing and different security techniques and protecting data in the cloud.
Introduction
"Cloud computing" refers to Internet-based computing that allows organizations to access a pool or network of computing resources that are owned and maintained by a third party via the Internet. (Reeta Sony A.L, Prof Sri Krishan Deva Rao, Bhukya Devi Prasad, 2013). The main goal of cloud computing is to make a better use of distributed resources, combine them to achieve higher throughput and be able to solve large scale computation problems. Cloud computing deals with virtualization, scalability, interoperability, quality of service and the delivery models of the cloud, namely private, public and hybrid. (Yashpalsinh Jadeja, Kirit Modi, 2012). As more companies, individuals and even governments place their data in the cloud, both customers and providers of cloud computing services must become acutely aware of the burgeoning laws and regulations restricting the collection, storage, disclosure and movement of certain categories of information. Cloud Computing has been very often portrayed and perceived as a new technology but it is also widely accepted as evolution of technologies such as client server architecture, World Wide Web, and networking. Some even call it mainframe 2.0. In 1960s mainframes were used for computing and transaction processing with users accessing the computing resources through 'dumb terminals'. 1980s saw the advent of protocols for networking and client server architecture. "The ability to connect users to computing and data resources via standardized networks emerged as a key enabler of cloud computing" (The Defense Science Board). The World Wide Web and the Internet followed in the 1990s along with enablers such as web browsers. The decade also saw the emergence of application service providers, offering software packaged as service over the internet. Refer Figure 1 for graphic on evolution of computing. (Trivedi, 2013) Figure 1: Evolution of Cloud Computing. (Trivedi, 2013) The term "cloud" originates from the world of telecommunications when providers began using virtual private network (VPN) services for data communications (John Harauz,Lorti M. Kaufman, Bruce Potter, 2009). Cloud computing deals with computation, software, data access and storage services that may not require enduser knowledge of the physical location and the configuration of the system that is delivering the services. Cloud computing is a recent trend in IT that moves computing and data away from desktop and portable PCs into large data centers (Marios D. Dikaiakos, George Pallis, Dimitrios Katsaros, Pankaj Mehra, Athena Vakali, 2009). The definition of cloud computing provided by National Institute of Standards and Technology (NIST) says that: "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. (National Institute of Standards and Technology) With the large scale proliferation of the internet around the world, applications can now be delivered as services over the internet.
In the cloud computing environment, consumers of cloud services do not need anything and they can get access to their data and finish their computing tasks just through the Internet connectivity. During the access to the data and computing, the clients do not even know where the data are stored and which machines execute the computing tasks (Yunchuan Sun, Junsheng Zhang, Yongping Xiong, Guangyn Zhu, 2014).
Cloud computing system can be dividing into two sections: the front end and the back end. They both are connected with each other through a network, usually the internet. Front end is what the client (user) sees whereas the back end is the cloud of the system. Front end has the client's computer and the application required to access the cloud and the back end has the cloud computing services like various computers, servers and data storage.
Figure 2: Front end and Back end of Cloud Computing
Monitoring of traffic, administering the system and client demands are administered by a central server. It follows certain rules i.e. protocols and uses a special software called the middleware. Middleware allows The user or the client of cloud computing services would be in the role of a data controller, and the cloud provider in the role of its contractual data processor, performing certain tasks regarding data processing, such as storage, copying, transferring, etc. A reminder -any handling of personal data is regarded as data processing, and personal data are any information related to an identified or identifiable individual. Be cautious, even if you cannot tell by yourself, who the data is relating to, others may be able to identify the person, without disproportionate effort or means. Identifiability of an individual should be interpreted broadly and not only through the capabilities of a certain entity, and through the presence of the exact data that enable direct identification of an individual.
Certain aspects of data protection, such as the proportionality principle, the purpose of data processing, and retention periods are, of course, an integral part of the framework for data protection. However, in the context of cloud computing they do not present any specificity. The areas that are exposed the most are contractual personal data processing, data security, and transfer of data to third countries (Personal Data Protection and Cloud Computing, 2012).
Protection of Personal Information
One of the most challenging issues arising from cloud computing is protection of personal data. Various cloud aspects pose issues for privacy. Jurisdiction is one of the foremost issues affecting privacy and personal data protection in cloud computing. Within cloud computing, there are no borders. Within this environment, data can be broken up and stored in multiple data centers across multiple jurisdictions. Security becomes the second most important issue. Personal data is processed and stored outside the infrastructure in a data warehouse, which makes it vulnerable to hackers and other forms of data breaches. This vulnerability can result in lost, destroyed or improperly disseminated data. (Reeta Sony A.L, Prof Sri Krishan Deva Rao, Bhukya Devi Prasad, 2013). Data security has regularly been a major issue in IT. Data security becomes particularly serious in the cloud computing environment, because data are distributed in different machines and storage devices including servers, Personal Computers, and various mobile devices such as wireless sensor networks and smart phones. Data security in the cloud computing is more complicated than data security in the traditional information systems (Yunchuan Sun, Junsheng Zhang, Yongping Xiong, Guangyn Zhu, 2014). Data protection and security comprise one of the major challenges in cloud computing. Most organizations adopt network centric and perimeter security, which are normally based on firewalls and intrusion detection systems and which are very much the traditional security systems. This type of data and security protection does not provide sufficient protection against, privileged users, or other insidious types of security attacks, whereas in cloud computing services, the provider may benefit from a data centric approach with encryption, key management, strong access controls, and security intelligence to provide security for the data. Data which is collected from users or consumers for its intended collection purpose and onward transfer or third party use of the data must occur only when authorized by law, as stipulated by the terms of the privacy policy, or according to customer preference. If cloud computing providers fail to manage these challenges, they will be unable to maintain the trust and confidence of their users or consumers.
Conclusion
Cloud computing deals with computation, software, data access and storage services that may not require enduser knowledge of the physical location and the configuration of the system that is delivering the services. Cloud Vol.10, No.4, 2019 computing is a recent trend in IT that moves computing and data away from desktop and portable PCs into large data centers (Marios D. Dikaiakos, George Pallis, Dimitrios Katsaros, Pankaj Mehra, Athena Vakali, 2009). The barrier and hurdles toward the rapid growth of cloud computing are data security and privacy issues. Reducing data storage and processing cost is a mandatory requirement of any organization, while analysis of data and information is always the most important tasks in all the organizations for decision making. So no organizations will transfer their data or information to the cloud until the trust is built between the cloud service providers and consumers (Yunchuan Sun, Junsheng Zhang, Yongping Xiong, Guangyn Zhu, 2014). Information security is a fundamental part and one of the essential principles of all the legal acts regulating the field of data protection. As a narrower part of personal data protection it refers to the protection of integrity, confidentiality and accessibility of personal data. (Personal Data Protection and Cloud Computing, 2012). Data security becomes particularly serious in the cloud computing environment, because data are distributed in different machines and storage devices including servers, Personal Computers, and various mobile devices such as wireless sensor networks and smart phones. Data security in the cloud computing is more complicated than data security in the traditional information systems. Data protection and security comprise one of the major challenges in cloud computing. Most organizations adopt network centric and perimeter security, which are normally based on firewalls and intrusion detection systems and which are very much the traditional security systems. This type of data and security protection does not provide sufficient protection against, privileged users, or other insidious types of security attacks, whereas in cloud computing services, the provider may benefit from a data centric approach with encryption, key management, strong access controls, and security intelligence to provide security for the data. | 2,459.8 | 2019-05-01T00:00:00.000 | [
"Computer Science"
] |
Evaluating New Targets of Natural Anticancer Molecules through Bioinformatics Tools
Plants-derive compounds play crucial role in development of several anti-cancer drugs and they target proteins having significant regulatory effects on tumor cell cycle progress. Bioinformatics and cancer research overlap in many different areas in order to solve some problems in the field of treatment. In this study, the target and drug likeness of natural anticancer molecules are predicted by PASS software. Consequently, some new mechanisms of anticancer molecules have been introduced. They include Pseudobaptigenin with 0.702 PASS thresholds which revealed protein tyrosine kinase inhibitory. In addition, Kabophenol A and Carasinol B with score 0.652 and 0.669 respectively exhibited topoisomerase I inhibitory effects. Moreover, Docetaxel, 7-xylosyl-10-deacetyl paclitaxel and Artemether by exhibiting the highest PASS score are the strongest anticancer agents in our research. It is notice worthy all of studied agents exhibited high drug-likeness score and it means that they can be applied as drug.
Introduction
Cancer is disorder of cells growth. It starts when a normal cell begins to grow in an uncontrolled and invasive way. Cancer is thought to be caused through the interaction between genetic susceptibility and environmental toxins [1]. There are several ways which are applied for cancer treatment, for instance: Surgery, Chemo-therapy, Immunotherapy (monoclonal antibody), Radiotherapy and Gene-therapy.
Chemotherapy is a kind of cancer treatment; it acts by destroying cells which are dividing rapidly. It means that it also has an effect on normal cells, such as: bone marrow, digestive tract, and hair follicles. Therefore, it results in side effects on patients who are exposed to chemotherapy. Most of the chemotherapeutic drugs target mitosis cell division in order to inhibit the hyperproliferation state of tumor cells and subsequently induce apoptosis. The majority of chemotherapeutic drugs can be clustered in alkylating-agents, antimetabolites, anthracyclines, plants alkaloids, topoisomerase inhibitors and other antitumor agents. The anticancer drugs can be subdivided in three main groups based on their mechanisms of action: (i) drugs that interfere with DNA synthesis, (ii) drugs that induce DNA damages, (iii) drugs that inhibit function of the mitotic spindle [2]. Plants are important source of anticancer agents and plant-derived compounds have played crucial role in development of several useful clinical anti-cancer drugs.
Bioinformatics is the mathematical, statistical and computing method that aims to solve biological problems. Bioinformatics can be applied in the field of medical sciences to consider the molecular pathways of diseases [3]. By developing sophisticated bioinformatics software's such as PASS (Prediction of activity spectra for substances); it is now possible to predict some targets of anticancer molecules on the basis of structural formula of a substance accurately. This study focused on some natural anticancer molecules, including: Docetaxel, 7-xylosyl-10-deacetyl paclitaxel, Pseudobaptigenin, Kabophenol A, Carasinol B, 7β-hydroxysitosterol, Dehydrocostuslactone, and Artemether. By applying PASS software, we found targets of these natural molecules and classified them based on their targets in cancer pathway. We believe that it can be as an efficient approach for recognizing new mechanisms of anticancer compounds.
Materials and Methods Data
A practical database is the main step in bioinformatics projects. Collection of data from Pub med database was accomplished with general keyword "anticancer". Most data were gathered from 2010 papers; therefore, known anticancer molecules and some information relevant to their targets in apoptotic pathway were extracted from these papers. In this case, molecules were classified based on their origins. As a result, there were 7 groups of anticancer molecule such as Drug Bank, plants, fruits, microorganisms, semi-synthetic, synthetic and finally ungrouped anticancer agents [4].
Structure
Structural formulas of these molecules were investigated from Chemspider, Pubcheme and Wikipedia, respectively in order to discover orginal molecular structure of all compounds. Then, their skeletal structures were drawn by Chemschetch, Chemaxon and version 5.4 software. ChemAxon is a leader in providing Java based chemical software development platform for biotechnology and pharmaceutical industries and is applied to reach 3D structure of molecules within MDL SD file, Protein Data Bank (PDB), Tripos MOL2 formats ( Figure 1). has capability to predict many types of activity for a new substance. PASS normally utilizes input data with molecular structure Protein Data Bank (PDB), Tripos MOL2, MDL MOL and SD file formats then represent the structural information about molecules under study. PASS prediction can be interpreted by Pa and Pi values. Pa and Pi values are as measures that determine activity and inactivity of compounds. Pa -the probabilities of being active and close to 1.000, Pi -the probabilities of being inactive close to 0.000; therefore, the Pa and Pi values are vary from 0.000 to 1.000 and in general Pa+Pi<1.
PASS software works successfully on a PC running Vista, windows 7 and XP. In this study PASS version 1.917 was applied ( Figure 2) and molecules with Pa more than 0.6 have been selected and categorized based on their targets in cancer pathway.
MNA (Multilevel Neighborhoods of Atom) descriptors are one the sections in PASS software that are utilized for assessing of chemical similarity based on 2D description of molecules and appropriate for use in QSAR ( Figure 3). According to (Robustness provoke) MNA descriptor doesn't specify the bond type and comprises hydrogen according to a valence and partial charge of atoms; thus, it is based on structure representation.
Results
Nearly 242 molecular structures were collected from PubChem, Chemspider database and Google search, these compounds were evaluated by PASS software in order to screen compounds with high anticancer activity and specify their targets during cancer pathway. Among these natural molecules, approximately 9 agents revealed anticancer activity with Pa more than 0.6 and they targeted specific proteins throughout cancer pathway. As it can be seen from table 1, Docetaxel and 7-xylosyl-10-deacetyl paclitaxel are as Microtubule formation inhibitors, β-tubulin antagonists, antimitotic agents because they showed Pa>0.6. Thus, according to their high Pa scores these molecules are as promising anticancer agents. It can be seen form table 2 that mentioned agents target PTK, Topisomerase I, MYC, DNA and protein efficiently and Artemether with Pa 0.8 are categorized as the most strong agent compare to other 6 molecules .
PASS software also has capability to estimate drug likeness of under study agents. Drug-likeness referred to specific score estimated from the molecular structure and indicated that specific molecule have some proportional properties which can be active biologically or might show therapeutic potential. Consequently, all 9 agents exhibited druglikeness more than 0.9 and it means that they can be applied as drug.
Discussion
In this paper, a mathematical approach is discussed to evaluate anticancer activity of molecules based on Pa activity. PASS (Predication of Activity Spectra for Substances) software capable of anticipating more than 1500 pharmacological effect that can be efficiently applied to find new targets for some ligands to reveal new biological activity of various substances as natural molecules have less side effects compared to synthetic ones, we tried to discover new natural anticancer drugs which target specific cancer targets efficiently and develop the spectrum of efficient anticancer of molecules.
Microtubules are key components of cytoskeleton, and are formed from tubulin molecules. They have crucial role in development and maintenance of cell shape, in transport of vesicles, mitochondria and other components throughout cells, in cell movements, in cell signaling as well as in cell division and mitosis [5]. Microtubules are the target of variety of specific antimitotic drugs. Antimitotic drug exerts its effect by causing disorganized stabilization of microtubules in area away from the centriole or causing destabilization of mitotic spindle which is interfering with mitosis [2]. Docetaxel is semisynthetic analogue of Paclitaxel and they bind to microtubules with high affinity in order to stabilize microtubules and prevent from depolymerization. 7-xylosyl-10-deacetyl paclitaxel is isolated from Taxus Chinensis,which exhibits higher water solubility than Paclitaxel [6] demonstrated that 7-xylosyl-10-deacetyl paclitaxel induced mitotic cell cycle arrest and apoptosis.
It can be seen from table 1, both molecules have microtubule formation inhibitory, Docetaxel exhibited higher Pa score (0.986) compared to 7-xylosyl-10-deacetyl paclitaxel (0.757) and it means that Docetaxel can prevent from forming microtubules with more strength. In addition, they are as β-tubulin antagonists too and there isn't apparent difference among their PASS thresholds, therefore, they reveal this property with the same strength. Moreover, Docetaxel and 7-xylosyl-10-deacetyl paclitaxel have strong antimitotic activity and their scores are 0.992 and 0.868 respectively. As a result, both molecules are promising anticancer agents which act through binding to microtubules and tubulins. According to their drug-likeness score, they can behave as an efficient drug.
Protein Kinases are vital components of signal transduction pathways. They act through responding to the extracellular environment for regulating both cell growth and modification. Protein tyrosine kinases have enormous roles in cancer molecular pathogenesis, and they are as potential target for anticancer drugs currently [1]. There are two classes of protein tyrosine kinase inhibitors. One is bound to the ATP binding site and the other is bound to the substrate binding site of the enzyme. For instance, Pseudobaptigenin is an isoflavone which can be isolated from Trifolium pretense [7] revealed that this agent has an antiproliferative effects, but no reference has been indicated to the principle target of Pseudobaptigenin in cancer pathway. Fortunately our results revealed that Pseudobaptigenin with 0.702 PASS score has high Protein Tyrosine Kinase inhibitor activity.
DNA topoisomerases are a class of enzymes involved in the regulation of DNA super coiling during replication. Type I topoisomerases cut one strand of double-strand DNA, relax the strand and reanneal the strand [8]. Kabophenol A and Carasinol B are stilbene tetramers, which can be isolated from Caragana chamague and Caragana sinic. It is demonstrated that Kabophenol A has effect on MCF-7 cells. While previous references didn't mention to main target of Carasinol B and Kabophenol A in cancer pathway, we found that these two molecules, which have 0.669 and 0.652 score chronologically, have potent effects on Topoisomerase I. Therefore, they have anticancer property and exert their anticancer effects by inhibiting Top I enzyme.
Myc is a very strong Proto-oncogene which is expressed at elevated levels in different types of tumors. Myc is as a suitable target for development of novel cancer therapies and by designing drugs which inhibit tumor cell proliferation and/or increase apoptosis; we can extend the spectrum of anticancer agents. 7β-hydroxysitosterol, is a type of sterol, is extracted from Sellaginella Tamarriscina [9] revealed that this molecule exhibited potent cytotoxicity. Our results suggested that 7β-hydroxysitosterol by exhibiting 0.657 PASS threshold has strong Myc inhibitor activity.
Aromatase is an enzyme, which is a member of cytochrome p450 superfamily, it is located in the endoplasmic reticulum of the cell. The aromatase enzyme can be found in many tissues including gonads, brain as well as in tissue of endometriosis, uterine fibroids, breast cancer and endometrial cancer. Therefore, aromatase is as a critical target for cancers treatment. Aromatase inhibitors are class of drugs used in treatment of cancers; these agents block the synthesis of Estrogen in order to reduce the level of Estrogen. Consequently, the rate of cancer growth will be slowed. Dehydrocostuslactone is a sesquiterpene lactone extracted from Saussurea lappa and Aucklandia lappa. Dehydrocostuslactone induces cell cycle arrest at G2/M, causes cell cycle arrest via CDK1 down-regulation. According to our PASS results, this natural agent by exhibiting Pa 0.656 is an aromatase inhibitor agent and its high Drug-likeness score (0.995) related to the fact that it might possess functional groups or has physical properties which are consistent with most of known drugs [10].
DNA synthesis (DNA replication) refers to the process of copying each DNA strand into a new complementary strand. DNA replication inhibitors are commonly used as anticancer agents. Artemether is a methyl ether derivative of Artenisinin is isolated from the leaves of Artemisia annua [11] demonstrated that the natural agent arrests cell cycle at G2. Fortunately this agent exhibited the highest Pa score (0.801) in compare to other agents and it means that Artemether is as a strong DNA synthesis inhibitor and can be as promising anticancer drug.
Protein synthesis is the process in which cells build proteins. A previous study showed that deregulation of protein synthesis is a major contributor in cancer initiation and metastatic progression. Acemannan is a D-isomer mucopolysaccharid in Aleo vera leaves [12]. This compound displayed chromatin condensation, DNA fragmentation and laddering characteristic of apoptosis. It is notice worthy that Acemannan with 0.613 Pass threshold has protein synthesis inhibitor properties and it acts by inhibiting protein synthesis.
Conclusion
On basis of our study, it is mentioned molecules have strong anticancer characteristic and among all of them Docetaxel, 7-xylosyl-10-deacetyl paclitaxel and Artemether by exhibiting the highest PASS score are the most potent agents in our research. In addition, we found fundamental target of Pseudobaptigenin kobophenol A and Carasinol B throughout cancer pathway in order to provide new insight of subsequent research into these agents because in vitro and in vivo experiments of these finding haven't been applied until now. It is supposed that by applying these types of experiment new properties of these molecules will be appeared. | 3,003 | 2012-02-29T00:00:00.000 | [
"Medicine",
"Biology",
"Computer Science",
"Chemistry"
] |
Crystal structure of 9-butyl-3-(9-butyl-9H-carbazol-3-yl)-9H-carbazole
In the title carbazole derivative, C32H32N2, the molecule resides on a crystallographic twofold axis, which runs through the central C—C bond. The carbazole ring system is almost planar, with a maximum deviation of 0.041 (1) Å for one of the ring-junction C atoms. The crystal packing is stabilized by C—H⋯π interactions only, which form a C(7) chain-like arrangement along [110] in the unit cell.
Cg is the centroid of the C7-C12 ring.
S1. Comment
Carbazole based materials play vital roles in various areas of research. Various carbazole based heterocycles exhibit a diverse range of biological activities including pim kinase inhibitory (Giraud et al., 2014), anti-inflammatory, antioxidant (Bandgar et al., 2012), antimicrobial (Gu et al., 2014), antitumor (Wang et al., 2011), and anti-Alzheimer (Thiratmatrakul et al., 2014) activities etc. On the other hand, this class of materials has been identified as potential ones for OLED applications (Shi et al., 2012.;Tavasli et al., 2012;Kim et al., 2011;Zhuang et al., 2012). As an intermediate for the development of new carbazole based materials for biological/OLED applications, a dibutylbicarbazole has been synthesized and single crystals were grown by slow evaporation in ethanol.
The X-ray study confirmed the molecular structure and atomic connectivity of the title compound, as illustrated in The carbazole ring system is planar with a maximum deviation of -0.041 (1) Å for atom C7. The atom C13 attached to the carbazole ring system deviates by 0.250 (1) Å from the best plane of the carbazole ring system.
S2. Experimental
In a round-bottomed flask (250 ml), iron(III) chloride (44.80 mmol) in chloroform (100 ml) was taken under nitrogen atmosphere. Then, 9-butyl-9H-carbazole (11.20 mmol) (Ramalingan et al., 2010) in chloroform (50 ml) was added in a drop-wise fashion and was stirred at ambient temperature for 1 hour. After the addition of a sodium hydroxide solution (10%), the organic phase was separated and the aqueous phase was extracted with chloroform. The combined organic phases were dried and concentrated to obtain the crude product which was dissolved in chloroform (15 ml) and reprecipitated slowly using methanol (200 ml). The product, thus, obtained was filtered, dried under vacuum at ambient temperature. Single crystals of (I) were obtained by slow evaporation of ethanol solution of the title compound at room temperature.
S3. Refinement
H atoms were placed in idealized positions and allowed to ride on their parent atoms, with C-H distances of 0.93-0.97 Å, and U iso (H) = 1.5U eq (methyl C) and U iso (H) = 1.2U eq for other C atoms. The molecular structure of the title compound, showing the atom-numbering scheme. Displacement ellipsoids are drawn at the 30% probability level.
Figure 2
Molecular packing of the title compound, viewed along the a axis; C-H···π interactions are shown as dashed lines.For the sake of clarity, H atoms, not involved in hydrogen bonds, have been omitted for clarity. where P = (F o 2 + 2F c 2 )/3 (Δ/σ) max < 0.001 Δρ max = 0.24 e Å −3 Δρ min = −0.19 e Å −3 Special details Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. | 807.8 | 2014-11-21T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Eastern genomics promises
A report on the Bio-IT World Asia meeting, Marina Bay Sands, Singapore, 6-8 June 2012.
The inevitable talks on big data
Chris Dagdigian (Bioteam), in an opening keynote entitled 'Bio-IT trends from the trenches' , summarized the current strategies for handling big data and set the scene for the whole meeting. Predictably, however, the main message to come from this and other talks was issues with scale. As increasing numbers of petabytescale data systems are deployed and the latest networkbased data movement using Aspera and GridFTP show impressive gains over physical (postal or hand delivery) data movement (at least using US networks), research centers are just about keeping their heads above water in dealing with data growth. With chemistries changing faster than data centers and research IT infrastructure can be refreshed, Dagdigian was understandably pessimistic about sustainability, but the work on CRAM compression algorithms from the European Bioinformatics Institute (http://www.ebi.ac.uk/ena/about/cram_ toolkit) was cited as one possible hope for the future. He was also dismissive of any cloud computing vendors lacking compatibility with the Amazon application programming interface (API). With the bold claim that these API-less 'cloud pretenders' , as well as platforms lacking self-service, do not equal a cloud, Dagdigian left the many following cloud-oriented talks with a tough act to follow. One subsequent speaker took these conclusions to heart and had to admit that what he was presenting was 'more of a fine mist' than a cloud.
Several cloud-oriented vendor talks still attempted to rise to this challenge. Representatives from Amazon Web Services, IBM, BT and Appistry highlighted their latest services, and Xing Xu presented on the new Easy Genomics cloud-based bioinformatics platform from the BGI Cloud team. The trial version of Easy Genomics has an attractive data analysis platform and an Aspera connection, and it includes six graphics processing unitbased and cloud-optimized sequencing tools, including the new Hadoop-optimized version of BGI's popular SOAP genome assembly suite (http://www.genomics.cn/ FlexLab/html/gaea.html).
Closing the reproducibility gap
Of the many critical issues coming out of the data-rich universe that we now find ourselves in, James Taylor (Emory University) focused his talk on what feels is the main crisis in genomics research reproducibility. With the life-sciences increasingly reliant on computational and data-driven approaches, access to the supporting data and tools and accessibility in using computational resources has not kept pace. With this in mind, Taylor honed in on the ways in which the popular Galaxy workflow environment (http://galaxy.psu.edu/) is working to address the problem.
The successes and challenges faced by Galaxy mirror on a smaller scale those of genomics as a whole: the 600 TB of data they host and tens of thousands of analysis jobs a month they are now handling is causing inevitable strains to their infrastructure. Taylor outlined how these pressures should be eased as increasing numbers of users use Galaxy Cloudman (http://usegalaxy.org/cloud), where they can take advantage of the elasticity and suitability as a reproducibility platform that a cloud-based platform with pre-configured software and a user-friendly interface can provide. The challenges for the Galaxy team in keeping on top of the increasing numbers of bioinformatics tools available has also been aided by Galaxy users wrapping and adapting tools for the Galaxy Toolshed, a directory of nearly 2,000 tools -over 10 times greater than those available through the main Galaxy site.
The growing popularity of Galaxy was apparent from the number of talks presenting work using it. For example, Andrew Lonie of the Victorian Life Sciences Computation Center (Melbourne) spoke about the Australian Genomics Virtual Library, which uses Galaxy and Bio-Linux in Australia's national research cloud, NeCTAR (http://www.nectar.org.au/). William Bartlett described Galaxy's integration into the computational and data management architecture at the National Center for Genome Analysis Support in Indiana, and Tin-Lap Lee also promoted the Galaxy platform that he and collaborators are putting together at the Chinese University of Hong Kong to handle data from the BGI and its GigaScience journal and database. Despite Galaxy being an open-source platform, even speakers from the pharmaceutical industry demonstrated its use, with Yaron Turpaz from Astra Zeneca highlighting the Cistrome platform recently published in Genome Biology (http://cistrome.org/ap).
Genomes, genomes, genomes
Several talks highlighted the wealth of genomes that are now publicly available. Yaron Turpaz presented the Asian Cancer Research Group's work on cancers prevalent in Asia and the recent public release of 88 paired hepatocellular carcinoma and normal genomes (available from GigaDB: http://dx.doi.org/10.5524/100034). Richard Tearle from Complete Genomics outlined their publicly available datasets, including 69 individuals and two matched tumor-normal pairs (http://www.completegenomics.com/ public-data/).
In light of the meeting's location, it was refreshing to see so much Asian genomic data on display and being publicly released. Jong Bhak (SNU/Theragen) provided an example of this in the rapid growth in the number of publically available Korean genomes: from Seong-Jin Kim's genome -the first Korean to be sequenced, in 2010, and incidentally also a speaker at the meeting -via 20 individuals in 2011, to the current 38 individuals, and on to the goal of sequencing 10,000 genomes over the next 3 years to capture all of the genetic variation in the relatively homogeneous Korean population. Being an open data advocate and influenced by his time in the laboratory of George Church, Bhak has made the data available from the Korean node of the Personal Genome Project (http://opengenome.net). Use of these data is available under his totally public domain BioLicense waiver (http://biolicense.org/), pioneered years before the recent interest in open data licenses and portable consent for personal genomics data.
A more genetically heterogeneous example was presented by Stephen Rudd of MGRC Malaysia, who presented on the MyGenome Malaysian genome project, which has so far sequenced 26 genomes from 6 of the many diverse ethnicities making up the Malaysian population. With the proliferation of new '-ome' words showing no signs of abating, the MGRC has delivered their own contribution to the field: the 'corporate pan-genome' . Following a company-wide 'spit party' , all 50 employees of the sequencing company were sequenced at 2x coverage. Rudd used this fun project as a demon strative starting point to discuss topics as diverse as contamination of saliva with someone's breakfast to the unique organizational occupational health insights that can be gleaned from this work. Although currently this may be an unusual teambuilding exercise, it is a window into the types of creative projects that are likely to be increasingly feasible as we move towards the 1,000 ringgit genome.
The scale and quality of the research presented were of a high international standard, but this conference still managed to impart some refreshing local flavor. The recurring themes of the promise and challenges associated with large-scale biological data are clearly global, but given that the Bio-IT World conference now spans three continents (including an upcoming European meeting in Vienna), the series provides opportunities to follow how researchers are tackling these issues from distinct regional perspectives. The addition of an Asia-Pacific meeting hopefully gives this previously under represented research community an opportunity to make their voices better heard and to work more closely on issues and genetic models that are more relevant to the region.
Competing interests
The author is an employee of the BGI and collaborates with a number of speakers at the meeting. | 1,685.6 | 2012-07-01T00:00:00.000 | [
"Geology"
] |
Progressive Modular Rebalancing System and Visual Cueing for Gait Rehabilitation in Parkinson's Disease: A Pilot, Randomized, Controlled Trial With Crossover
Introduction: The progressive modular rebalancing (PMR) system is a comprehensive rehabilitation approach derived from proprioceptive neuromuscular facilitation principles. PMR training encourages focus on trunk and proximal muscle function through direct perception, strength, and stretching exercises and emphasizes bi-articular muscle function in the improvement of gait performance. Sensory cueing, such as visual cues (VC), is one of the more established techniques for gait rehabilitation in PD. In this study, we propose PMR combined with VC for improving gait performance, balance, and trunk control during gait in patients with PD. Our assumption herein was that the effect of VC may add to improved motor performance induced by the PMR treatment. The primary aim of this study was to evaluate whether the PMR system plus VC was a more effective treatment option than standard physiotherapy in improving gait function in patients with PD. The secondary aim of the study was to evaluate the effect of this treatment on motor function severity. Design: Two-center, randomized, controlled, observer-blind, crossover study with a 4-month washout period. Participants: Forty individuals with idiopathic PD in Hoehn and Yahr stages 1–4. Intervention: Eight-week rehabilitation programs consisting of PMR plus VC (treatment A) and conventional physiotherapy (treatment B). Primary outcome measures: Spatiotemporal gait parameters, joint kinematics, and trunk kinematics. Secondary outcome measures: UPDRS-III scale scores. Results: The rehabilitation program was well-tolerated by individuals with PD and most participants showed improvements in gait variables and UPDRS-III scores with both treatments. However, patients who received PMR with VC showed better results in gait function with regard to gait performance (increased step length, gait speed, and joint kinematics), gait balance (increased step width and double support duration), and trunk control (increased trunk motion) than those receiving conventional physiotherapy. While crossover results revealed some differences in primary outcomes, only 37.5% of patients crossed over between the groups. As a result, our findings should be interpreted cautiously. Conclusions: The PMR plus VC program could be used to improve gait function and severity motor of motor deficit in individuals with PD. Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT03346265.
Perhaps even more significant than clinical and functional impacts of gait impairment, this pathological consequence of PD also determines social and economic costs due to falls and trauma. The significance of treating gait disturbances is reinforced by prior work showing that gait outcomes are related to longevity (17), cognitive decline (18), and adverse events (19). Therefore, rehabilitative interventions for the treatment or attenuation of gait impairments should be one of the primary foci of inpatients with PD.
One of the longest studied and most documented techniques for gait rehabilitation in PD is the use of sensory cueing (20,21). Several studies have shown improvement in electromyographic and spatiotemporal parameters of gait in PD patients undergoing gait training with auditory, visual, and tactile cues (22)(23)(24)(25)(26)(27).
While the mechanisms responsible for the improvements in gait due to sensory cueing are not fully understood, it is believed that individuals with PD have lower activity in certain brain areas that are responsible for the internal pathways needed to implement automatic and sequential movements (28). For instance, during sensory cueing for walking with visual cues (VC), patients with PD likely focus their attention on the discrete goal of each foot, hit a VC placed on the floor, and then use exteroceptive information (i.e., position of the next foot placement location) to plan each step individually at a cortical level (25). In addition to gaitoriented training (21,29), several different approaches using exercise therapy have been proposed aiming at improving mobility, muscular strength, resistance, balance, and aerobic conditioning, and endurance and axial alignment (30)(31)(32)(33). This heterogeneous mix of rehabilitation approaches also revealed indirect improvements in gait function (32,33). Furthermore, some cognitive rehabilitative techniques including action observation therapy and motor imagery have recently been proposed to facilitate gait and motor performance in patients with PD (34).
These previous findings then suggest that a variety of rehabilitative procedures can be effective and the optimal type of physiotherapy activity has not yet been determined (32,33). As such, a single multifaceted, structured, and comprehensive rehabilitative approach, acting on the different aspects of motor control (e.g., balance, muscle strength, flexibility, trunk and joint mobility, and muscle endurance) is needed for treating gait disturbances and motor impairment in patients with PD. The European Physiotherapy Guidelines for PD has proposed specific areas of intervention to address this need. These guidelines propose a series of exercises that can be combined into one rehabilitation program; however, they are currently not functionally connected to each other in a single structured rehabilitation procedure.
The progressive modular rebalancing (PMR) system is an exercise-based therapy based on proprioceptive neuromuscular facilitation (PNF) principles (35)(36)(37). PNF was further developed into an alternate approach (38) mainly focusing on trunk mobility, strength, endurance, and functional connection with proximal muscles. It may be particularly appropriate for patients with PD in whom the abnormal activation of trunk rotator and extensor muscles, trunk motion, ability to roll over on the bed, and axial rigidity are all associated with a high risk of falls (39,40). PMR proposes a trunk-specific exercise program that is preliminary and preparatory for gait exercises. The link between trunk and gait exercises is proposed as proximal movement stimulation, which is performed through rhythmic stimulation of scapular and pelvic girdle movements. PMR gait exercises are performed with a focus on interlimb coordination and reactive postural control and are preceded by exercises aimed at strengthening the bi-articular musculature involved in walking through PNF patterns (41).
In this regard, the European Physiotherapy Guidelines for PD reported only one trial on trunk muscle strength training for gait improvement and recommended the identification and correction of trunk muscle weakness in the design of rehabilitation programs, as traditional abdominal crunches alone were not effective (42).
In the present study, a rehabilitation program was proposed for patients with PD based on the combination of PMR and VC aimed at improving gait performance by improving balance and trunk control during gait movement. Our hypothesis was that the effect of VCs may interact with improved motor performance induced by the PMR treatment. Specifically, it is hypothesized that patients may improve their bi-articular hip muscle function and trunk and balance control through the PMR system and thus better exploiting the information (spatial and temporal) delivered by the VC, resulting in improvements in specific gait parameters, joint kinematics, and trunk motion.
The primary aim of this pilot trial was to establish whether an 8-week PMR exercise program focused on improving gait function in addition to VC training in people with PD was more effective than a same-duration program of conventional physiotherapy including VC as recommended by European Physiotherapy Guidelines for PD.
The secondary aim was to evaluate the effect of these interventions on the disease severity.
Participants
Sixty individuals with idiopathic PD admitted for outpatient rehabilitation were assessed for eligibility at two rehabilitation centers between May 2015 and December 2017. Forty subjects were ultimately included in the study. The inclusion criteria were as follows: (i) diagnosis of idiopathic PD according to UK bank criteria (43), (ii) Hoehn and Yahr stages 1-4 (44), and (iii) UPDRS-III gait sub-score of 1 or higher (45). All patients were in a stable drug program and acclimated to their current medication use for at least 2 weeks. Exclusion criteria were as follows: (i) cognitive deficits (defined as scores of <26 on the Mini-Mental State Examination), (ii) moderate or severe depression (defined as scores of >17 on the Beck Depression Inventory), and (iii) orthopedic and/or other gait-influencing diseases such as other neurological diseases, arthrosis, or total hip joint replacement.
A mandatory requirement for inclusion in the study was also the ability to walk independently for at least 8 m along the laboratory pathway without showing freezing of gait.
The study was approved by the ethics committee of Hospital Policlinico Umberto I of Rome/Sapienza University of Rome (Approval Number: 2346454) and patients provided written informed consent. All procedures conformed to the Helsinki Declaration. The study was registered with ClinicalTrials.gov (clinical trial identifier: NCT03346265). The detailed participant flow is shown in Figure 1A.
Study Design
This was a pilot, two-center, randomized, blind observer, controlled trial with crossover, following the recommendations of the Consolidated Standards of Reporting Trials (46).
Subjects participated in a baseline assessment session (T0, before rehabilitative treatment) and were randomly allocated to an 8-week rehabilitation program (A or B) followed by a 4-month washout period (patients did not have to perform any rehabilitative treatment), after which patients who received treatment A switched to treatment B and vice versa. A computerbased randomization schedule was used. All patients were assessed at the same center. Randomization was stratified according to a block of 20 numbers, so that each block comprised 10 patients randomly assigned to treatment A and 10 patients assigned to treatment B. Since all the subjects were evaluated at the same center, allocation was performed by an investigator not involved in subject recruitment or assessment at the end of the baseline assessment.
Both clinical (neurological visit and scale administration) and instrumental (gait analysis) assessments were performed at baseline, before rehabilitative treatment (T0), 4 weeks after the beginning of the rehabilitative treatments (T1), and at 8 weeks (at the end of rehabilitation program) (T2) (Figure 1B). Medication use remained constant throughout the study period, and all the treatments were performed at the same time of the day for each patient during the ON phase.
Participants were asked to maintain their daily pre-enrollment activity level.
Assessors, for both clinical and instrumental evaluations, were blinded to the allocation of treatment.
Intervention
The exercise program was conducted three times per week for 60 min over an 8-week period. Physical therapists with expertise in PD administered the exercise programs (ES, SFC, MP, DG, and GS). Each session was divided into muscular stretching exercises, aiming at increasing the step length and the mobility of the trunk, and tailored progressive exercise therapy. Stretching exercises were performed based on the contract-hold-relax principles, and trunk muscles were lengthened. Perception exercises reciprocally activating anterior elevation and posterior depression of both the shoulder and pelvis complex were performed. Trunk strength exercises were performed based on postural steps, moving from the supine to the upright position, and specific extensor muscle recruitment exercises. Recruitment exercises aiming to reach and maintain specific symmetrical positions (like supine bridging or the reverse tabletop pose) were performed by patients presenting camptocormia, and asymmetrical positions (like side sitting or side bridging) were performed by patients presenting with the Pisa syndrome. Physical therapists guided the patients during the walking training by stimulating upper limb movements at the same time. Walk training was also performed for the knees to better recondition reciprocal hip movements.
A further detailed description of the PMR technique with motor patterns is reported in Table 1.
For the VC training, white parallel transverse lines (white, 800 × 19 mm) were placed on the floor perpendicular to a dark walkway path at intervals equal to 40% of the patient's height. Lines were moved further apart by 0.05 m per stride every 3 or 4 days and did not bend through the chicane. Participants were asked to walk across the lines matching their step length to the interval between the stripes, turn, and return to the start line.
Treatment B
Conventional physiotherapy was administered according to the European Physiotherapy guidelines for PD (http://www. appde.eu/european-physiotherapy-guidelines.asp) and focused on the following areas based on the stage of the disease: selfmanagement support, prevention of inactivity and fear of falls, maintaining or improving global motor activities, improvement of physical performance, and improvement in the ability to perform transfer, balance, gait, and manual activities, reduce pain, and delay the onset of physical limitations.
Exercises included the following: standing up from and sitting down on the floor; standing and walking on foam with and without perturbation (pushes and pulls) to the trunk; sitting down onto and rising from a chair (while dual tasking); getting into and out of bed; rolling over in bed; walking and taking large steps with large amplitude arm swings; walking around and over obstacles; walking with sudden stops and changes in walking direction (including walking backwards); walking and maintaining balance while conducting dual tasks (such as talking, carrying an object, or turning head left to right to wall-mounted
STRETCHING EXERCISES
Gluteus medius stretching: The therapist flexes, adduces, and externally rotates the limb to be stretched. The contralateral leg is extended, adducted, and externally rotated and the knee flexed. The therapist asks to extend, abduct, and internally rotate the hip to be stretched against his body and, after relaxing, gains range of motion 3-5 times.
Gluteus maximus and adductor magnus stretching: The therapist flexes, abducts, and internally rotates the hip to be stretched. The contralateral hip is extended, abducted, and internally rotated, the knee flexed. The therapist asks to extend, adduce, and externally rotate the hip to be stretched against his resistance, to hold, and, after relaxing, gains range of motion. Repetitions: 3-5 times.
Biceps femoris stretching:
The therapist flexes, adduces and externally rotates the leg to be stretched with an extended knee. The contralateral leg is extended, adducted, and externally rotated and the knee flexed. The therapist asks to extend, abduct, and internally rotate the hip to be stretched against his body and, after relaxing, gains range of motion 3-5 times.
Semitendinous and semimembranosus stretching: The therapist flexes, abducts, and internally rotates the leg to be stretched with an extended knee. The contralateral hip is extended, abducted, and internally rotated, and the knee flexed. The therapist asks to extend, adduce, and externally rotate the hip to be stretched against his resistance, to hold, and, after relaxing, gains range of motion. Repetitions: 3-5 times.
Iliopsoas stretching:
The therapist extends, abducts, and internally rotates the hip to be stretched, and the knee is extended. The contralateral leg is flexed, abducted, and internally rotated, asking the patient to hold it. The therapists ask to flex, adduce, and externally rotate the hip to be stretched against his resistance, to hold, and, after relaxing, gains range of motion. Repetitions: 3-5 times.
Quadriceps femoris stretching: The therapist extends, abducts, and internally rotates the hip to be stretched and flexes the knee. The contralateral leg is flexed, abducted, and internally rotated, asking the patient to hold it. The therapists ask to flex, adduce, and externally rotate the hip and to extend the knee to be stretched against his resistance, to hold, and, after relaxing, gains range of motion by flexing the knee. Repetitions: 3-5 times.
Rotary torso muscles stretch:
The patient is side sitting, the therapist is behind him and rotates his torso, asks to rotate against him, to hold, and then, after relaxing, gains range of motion toward the concave side. Repetitions: 3-5 times.
Torso extensor muscles stretch: The patient is side sitting, the therapist is in front of him and flexes his torso by flexing his head and extending his arms, asks to lift up patient's arms and look at that against his resistance, to hold, and then, after relaxing, gains range of motion toward the concave side. Repetitions: 3-5 times.
(Continued)
Frontiers in Neurology | www.frontiersin.org The patient is side sitting. The therapist is standing in front of him and pulls the patient's arm up high and tilts the torso toward the concave side. The therapist asks to extend, abduct, and internally rotate the patient's arm, then to hold, and then, after relaxing, inclines the trunk further toward the concave side. Repetitions: 3-5 times.
TRUNK POSTURAL ALIGNMENT EXERCISES
Exercises for the erector spinae muscles: The patient is long sitting, the therapist is behind him and asks him to hold an isometric contraction against his resistance at the end of a bilateral flexion-abduction-external rotation pattern for 5 s at least.
Reverse tabletop pose exercise: After stretching the shoulder, arm, and lower limb muscles and performing the supine bridge exercise, the goal is to reach and maintain this position with extended and 90 • flexed knees, flexed head, and well-adduced shoulder blades.
Side bridge exercise:
After having stretched out the obliquus muscles of the tilted side and having recruited those of the weakest side, performing this exercise on the weakest side is the goal for patients presenting Pisa Syndrome. The exercise can be performed bearing on the elbow or by flexing the knee too.
WALKING TRAINING
Stimulation of the movements of the shoulder complex: The therapists rhythmically ask the patient to anteriorly elevate his shoulder toward his nose or posteriorly depress it by adducing his shoulder blade toward the column. First, the patient has to perceive the passive movement performed by the therapist and then has to perform it actively against resistance. When the patient can perform the two movements, the therapist asks him to reciprocally activate anterior elevation and posterior depression.
Stimulation of the movements of the pelvic complex: The therapists rhythmically ask the patient to anteriorly elevate his pelvis toward or posteriorly depress it. First, the patient has to perceive the passive movement performed by the therapist and then has to perform it actively against resistance. When the patient can perform the two movements separately, the therapist asks him to reciprocally activate anterior elevation and posterior depression.
Examples of progressive modular rebalancing exercises.
dots or photos and reporting on what is seen); turning around in open, narrow, and small spaces; and climbing steps or stairs.
The VC training was performed as an integral part of the conventional physiotherapy and consisted of visual white lines placed on the ground in the same way as in treatment A. This was performed three times a week for 30 min as recommended in the European Physiotherapy guidelines. The VCs were discretionally executed by the physiotherapists during the course of each treatment session.
The rehabilitation program is composed of a 60 min session once a day, performed 3 days/week.
Participants within this program were encouraged to progress, based on stated progression criteria. Progression in range of motion exercises, stretching exercises, upper and lower limb strengthening exercises, and improving balance, standing, sitting, transferring, and walking was encouraged in all participants.
Gait Analysis
Gait analysis was performed using an eight-ray infrared optoelectronic SMART-DX 500 motion analysis camera and system (BTS, Milan, Italy) with a sampling rate of 300 Hz. The system detected the motion of 22 passive spherical markers, placed over prominent bony landmarks according to the international recommendations and validated biomechanical models (48). Anthropometric data were collected from each participant (48).
Patients were asked to walk barefoot at a comfortable speed along a walkway. As we were interested in natural locomotion, only general qualitative instructions (e.g., "walk at natural speed you would use in your daily life, " "look forward, " and "do not turn or stop") were provided. The same instructions were given to all participants. Before the recording session, the subjects practiced for a few minutes to familiarize themselves with the procedure. Five trials were recorded for each locomotor task. All patients were recorded in ON state.
We focused on evaluating three important aspects of gait function: gait performance (e.g., speed, step length, hip joint range-of-motion [RoM]), gait balance (gait related-parameters, e.g., step width and double support duration), and trunk control (trunk kinematics).
Primary Outcomes
The following kinematic parameters were measured: stance phase duration (%), double support phase duration (%), cadence (step/min), step length, step width (m), mean speed (m/s), spatial asymmetry index, temporal asymmetry index, hip, knee and ankle flexion-extension RoM and trunk flexion-extension, lateral bending, and rotation RoM.
Secondary Outcomes
Disease severity was evaluated using the UPDRS-III (45). A neurologist with expertise in movement disorders and blinded to patients' allocation administered the UPDRS scale.
Statistical Analysis
A power analysis using the G * Power computer program (49), based on preliminary data from the T1 assessment (50) indicated a total sample of 24 participants to detect medium effects (d = 0.5) with 80% power using an unpaired t-test between means of α = 0.05. Due to the number of gait parameters considered as the primary outcomes of this pilot study, we calculated the sample size according to Whitehead et al. (51), who identified a conservative minimum sample size of 20-40 subjects for a pilot trial. Thus, we chose to consider a sample size ranging from a minimum of 24-40 subjects.
Intention-to-treat analysis (ITT) was conducted, with the ITT population defined as all randomized patients who provided at least one baseline efficacy assessment and attended at least one treatment session.
The Shapiro-Wilk and Levene tests were used to assess normality and homogeneity of variance, respectively, for all measures. Baseline characteristics were compared between the groups using either a Student t-test (parametric data) or Mann-Whitney U-test (non-parametric data) or, for categorical variables, using the Fisher exact test.
We assessed the effect of the rehabilitative treatments on both the primary and secondary outcomes through an ANOVA mixed-effect model taking into account longitudinal repeated measures including the effect of time (T0-T2) within each treatment group and interaction between time and intervention. Missing values were imputed with the last observation carried forward (i.e., baseline, intermediate evaluation). Greenhouse-Geisser correction was applied, when deemed necessary, to circumvent violations of sphericity (i.e., inequalities in the variance of the differences between factors). The Bonferroni correction for multiple testing was applied for pairwise comparisons to account for the familywise error rate.
A crossover design was used to reduce both the impact of inter-individual variability by exposing each subject to two different interventions and the effect of the disease progression by exposing subgroups to different treatment sequences.
Furthermore, a 4-month rest period (wash-out) between the rehabilitative treatments was introduced to reduce a potential carryover effect and reproduce a hypothetical basal condition after the former intervention. To test for possible carryover effects, we calculated the sum of the values measured in the two periods for each subject and compared across the two sequence groups using a test for independent samples.
Statistical significance was set at p < 0.05 for two-sided tests, and all analyses were performed using SPSS 20.0 (IBM SPSS).
RESULTS
Twenty patients (12%) of the total 60 identified were not enrolled, as they did not meet the inclusion criteria, or declined to participate. Forty patients consented to participate and were enrolled (Figure 1A, Table 2A). According to the H-Y classification, there were 11 patients in stage 1, 13 patients in stage 2, 12 patients in stage 3, and 4 patients in stage 4. Eight of these patients failed to complete T2 and thus were considered as having missing values and inputted with the intermediate observation carried forward (two patients in group A and six patients in group B) (Figure 1A). A total of 32 patients completed the 8week treatment (treatment adherence: 90.5% in Group A, 68.4% in Group B; p > 0.05 for the difference in primary endpoint). The assessments from the eight patients who dropped out were input forward in the final analysis. All patients were taking oral administrations of levodopa (18 patients), dopamine agonists (5 patients), or both (17 patients). No significant differences in demographics were noted between groups at T0 (all, p > 0.05) or in clinical characteristics, UPDRS-III, H-Y scale, and total Levodopa Equivalent Dose (LED) (all, p > 0.05) ( Table 2A).
Primary Outcomes (Gait Parameters)
A significant main effect of time * group interaction was found in speed, right and left stance duration, right and left double support duration, left step length, cadence, step width, spatial asymmetry, right and left hip RoM, right and left knee RoM, right and left ankle RoM, trunk flexion-extension, and trunk bending ( Table 2B).
Post hoc analysis revealed no significant differences between groups at T0 for almost all variables with the exception of right and left hip RoM. Significant improvements in almost all gait parameters were found in Group A compared to Group B at both or either T1 or T2, except for right ankle RoM and trunk rotation, which were not different between the two treatment groups (Figure 2).
A significant main effect of time was found in speed, left stance duration, spatial asymmetry, trunk flexion-extension, trunk bending, trunk rotation, right and left step length, right and left hip RoM, right and left knee RoM, and right and left ankle RoM (Table 2B).
Post hoc analysis revealed significant improvements at both or either T1 and T2 compared to T0 (Figure S1) in speed, left stance duration, right and left step length, trunk flexion-extension RoM, trunk bending, trunk rotation, right and left hip RoM, right and left knee RoM, and right and left ankle RoM.
A significant main effect of time was found on the UPDRS-III score (Table 2B). Post-hoc analysis revealed significant improvement (lower values) of the UPDRS-III scores at T2 compared to that at T0 (Figure S1). UPDRS-III score changed from 15.7 points at T0 to 14.4 at T1 to 14.1 at T2 (Table 2B).
Patient Crossover
In this study, 15 patients (37.5%) crossed over between the groups (8 patients from A to B and 7 patients from B to A) ( Figure 1A).
No carryover effect was found for either gait variables or UPDRS scores (p > 0.05). Due to the small number of subjects who crossed over, the non-parametric Mann-Whitney U-test was used to compare gait parameters, expressed as a percentage difference from the afterwashout baseline values between the two treatments, at T1 and T2. A significantly greater improvement in trunk rotation RoM (T1: Cohen's d = 1.28; T2: Cohen's d = 1.36) and right ankle RoM (T1: Cohen's d = 6.50; T2: Cohen's d = 5) was found with treatment A when compared with treatment B (Figure 3). No significant differences were found for the other parameters.
DISCUSSION
The present findings showed that a rehabilitative approach based on PMR plus VC for rehabilitation of gait function in people with PD appears to be more beneficial when compared to conventional physiotherapy executed according to European guidelines. Specifically, these findings can be summarized as follows: (i) both treatments improved gait function and motor function severity; (ii) patients who received PMR with VC presented with better results in gait performance (increased step length, speed, and joint kinematics), gait balance (increased step width and double support duration), and trunk control (increased trunk motion) than those who received conventional physiotherapy; and (iii) although only 37.5% of patients crossed over between the groups, there still were some differences in the primary outcomes.
The results are in concordance with previous data from Cochrane and systematic reviews, which reported that patients with PD showed a short-term positive effect on gait and balance functions and on motor function severity from several different rehabilitative techniques (32,33). However, as revealed in this study, PMR plus VC seems to be significantly better than conventional physiotherapy in improving almost all performance-related gait parameters, balance-related gait parameters, and trunk motion (Figures 2-4, Table 2). Thus, the PMR technique should be considered in addressing gait function in patients with PD. The European Physiotherapy Guideline for Parkinson's disease (52) identified five core areas in which a rehabilitation program should lead to improvements, depending on the patient's cognitive condition and the stage of the disease: physical capacity, weight transfer, manual activities, balance, and gait. Improvements in these areas can be expected to lead to improved performance in activities of daily living. However, the interventions used previously are largely heterogeneous (e.g., stretching, muscle strengthening, balance, postural exercises, occupational therapy, cueing, treadmill training) and, taken as a whole, are not part of a unique and directed rehabilitation system. In addition, presently, there is still no consensus about the optimal approach for PD patients (33). PMR is a context-adaptable rehabilitation method in which both patient assessment and exercise are trunk-centered. This aims to progressively recover first the control of the trunk and then its relationship with the limbs, combining them in multiple motor schemes performed in different postural configurations (see Table 1). Notably, in addition to an improvement in the gait spatiotemporal parameters and joint kinematics, we also found a significant improvement in trunk motion (Figure 4). Since a high percentage of patients with PD show postural abnormalities and poor trunk control (8), which predispose them to a high risk of fall (53), special attention should be paid to these aspects of motor control. Indeed, the head and trunk comprise 60% of the overall mass of the body. Thus, one's ability to precisely coordinate trunk movements during walking contributes significantly to creating a more energy-efficient gait pattern, coupling action of the trunk, and pelvis as a resonating pendulum and reducing overall momentum (54). PMR plus VC also showed better improvement in balance-related gait parameters (i.e., step width and double support duration), suggesting a positive effect on dynamic balance, which may prevent falls in patients with PD.
Remarkably, while although differences in improvements of biomechanical parameters were found, no significant differences emerged with respect to UPDRS-III scores. This may suggest that clinical scales alone are not exhaustively sensitive in determining changes in some motor aspects induced by physiotherapy and, thus, must be supplemented by objective instrumental measures. However, given that most patients were in stages 1 through 3 and only four patients were in stage 4, it is conceivable that our results from the PMR plus VC only support it as an effective method in patients in stages 1-3 H-Y. As such, we point out that our results may not be applicable to more severe cases of PD.
The main limitation of this study is the small sample size at crossover. Although the number of eligible individuals was relatively high, many patients were excluded due to transportation problems from the crossover portion of the study. The limitation of the small sample size at crossover, even given that it was partly offset by the adoption of sensitive quantitative measures of motion, suggests that the results should be interpreted with substantial caution. However, the crossover design, which evaluates intra-individual changes, still allowed the detection of a therapy response, which may have been missed in a similar sized parallel group study. Although the number of subjects at crossover did not meet the sample size criteria and thus did not allow for the same inferential statistics used in the main portion of the study, we still found some significant improvements with treatment A compared to treatment B (Figure 3). The trunk and right ankle RoM improved more with treatment A than with treatment B at T1 and T2. An important result from the crossover design was that no carryover effect was found after the washout period, suggesting that the effect of both treatments lasted no longer than 4 months.
Another possible limitation of this study is that it is difficult to conclude that either PMR alone or VC alone is better than conventional therapy. This study proposed using sensory cueing, which is a well-established technique for gait rehabilitation, as adjunctive treatment to the PMR system, within a unique rehabilitation program. We suggest that PMR treatment may result in globally improved trunk control, hip motion, strength, and endurance (in addition to other factors), predisposing patients to the improvement of the gait rhythm and automaticity induced by the use of the external VC.
However, VC was also an integral part of the conventional physiotherapy used in this study. The main difference was that in treatment A, the VC was systematically executed at the end of the PMR for 20 min, whereas in the conventional treatment, it was executed for 30 min and discretionally applied during the course of each treatment session. Although both treatment groups underwent VC, we cannot entirely explain or confirm the specific contribution of both the VC and PMR. For instance, the specific contribution of the VC could be different based on which rehabilitation treatment it was associated with. A threebranch trial design (conventional physiotherapy, PMR, and VC treatments) is needed to understand the specific contribution of the PMR alone compared to either conventional physiotherapy or VC.
Despite these limitations, this study proposes a comprehensive rehabilitation treatment regime in addressing key pathological outcomes of PD. Furthermore, the results are consistent and can be generalized to clinical practice. However, further studies are needed to assess the long-term effect of this rehabilitative approach.
In conclusion, the present findings show that PMR plus VC is effective in improving gait performance, balance, and trunk control and should be considered as a possible rehabilitative strategy for the treatment of PD and other neurodegenerative diseases.
DATA AVAILABILITY
All datasets generated/analyzed for this study are included in the manuscript and/or the Supplementary Files.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of name of guidelines, name of committee with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the name of committee.
AUTHOR CONTRIBUTIONS
MS contributed to the study design, revision, and manuscript elaboration. MP, DG, GS, and SC were in charge of the patient's enrollment and rehabilitation. GM, SC, FP, ES, and MB were in charge of the supervision and manuscript elaboration. GC, AR, and CC were in charge of data analysis, statistical analysis, and manuscript elaboration. | 7,719.6 | 2019-08-29T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Rogue waves on the periodic background in the extended mKdV equation
We construct new exact solutions of the extended mKdV (emKdV) equation. The exact solutions are obtained by nonlinearization of spectral problem associated with the travelling periodic waves and using the one-fold, two-fold Darboux transformations. We consider the dnoidal and cnoidal travelling periodic waves of the emKdV equation. Since the dnoidal travelling periodic wave is modulationally stable, the algebraic solitons propagating on dnoidal wave background. However, since the cnoidal travelling periodic wave is modulationally unstable, the rogue waves generated on the cnoidal wave background.
Introduction
For the Ablowitz-Kaup-Newell-Segur (AKNS) spectral problems, there are two local group constraints that generate local integrable mKdV equations [1]. There exist some effective methods to compute analytical solutions to mKdV equation. The author of Ref. [2] use the binary Darboux transformations to obtain the soliton solutions for matrix mKdV equations from the zero seed solution and Wang et al. used the finite-gap integration approach and Whitham modulation theory to give the exact solutions for the defocusing complex mKdV equation with step-like initial condition [3]. The algebraic method based on the nonlinearization of Lax pair [4][5][6] can be used to get explicit solutions for the AKNS spectral problems. The authors of Refs. [7,8] employ the nonlinear steepest descent method to study the long-time asymptotic behavior of the solutions of the mKdV equation and the focusing Kundu-Eckhaus equation. N-soliton solutions in both (1+1)-dimensions and (2+1)-dimensions was obtained through the Hirota direct method [9] and for nonlocal PT-symmetric mKdV equation have been discussed by the Riemann-Hilbert problems [10].
Rogue waves also called killer waves are short-lived, large amplitude waves that occur locally. Rogue waves were discovered in the deep ocean, the field of optics [11] and the capillary waves [12]. The formation of rogue waves may be connected with the modulation instability of the background wave. Peregrine was the first to show that modulation instability can be lead to a rapid increase in the wave amplitude [13]. In order to construct rogue waves on the periodic background, Chen and Pelinovsky [14] first combine the nonlinearization a e-mail<EMAIL_ADDRESS>(corresponding author) of spectral problem with the Darboux transformation method, and then by using these two approaches, rogue waves on the periodic background have obtained for the NLS equation [15,16], modified Korteweg-de Vries equation [14,17], Hirota equation [18,19], derivative NLS equation [20,21], sine-Gordon equation [22,23] and some other equations [24][25][26].
Two types of traveling periodic waves of mKdV equation are expressed by the Jacobian elliptic functions which are dnoidal and cnoidal periodic waves [14] and stability of periodic wave was studied in [27,28], where it was concluded that the dnoidal periodic wave is modulationally stable and the cnoidal periodic wave is modulationally unstable, rogue waves only generated on the background of cnoidal periodic wave and steady propagation of an algebraic solitons propagating on the background of dnoidal periodic wave. Recently, they generalized this results to the discrete mKdV and investigated modulational stability of the traveling periodic waves and obtained similar results [29]. This conclusion also can be extended to higher-order mKdV [30,31] and they have been studied fifth-order Ito equation and seventhorder mKdV equation separately.
The most general traveling periodic waves of mKdV equation written as a rational function of Jacobian elliptic functions [17], which first appeared in the Ref. [32]. It shows that Darboux transformations with the periodic eigenfunctions, the new solutions remain in the class of the same travelling periodic waves and Darboux transformations with the non-periodic eigenfunctions produce rogue waves generated on the background. The authors of Ref. [33,34] discussed fifth-order Ito equation and seventh-order mKdV equation separately.
In this paper, we consider the emKdV equation in the following form where α, β are arbitrary constants. Let q per =q(x − ct) be a travelling periodic wave of the emKdV Eq. (1) with the period L. We say that q(x, t) is a rogue wave on the background of the periodic The spectral instability of periodic waves represented by the Jacobi elliptic functions were investigated in the focused NLS [35]. For travelling periodic waves in the emKdV equation, from Ref. [36,37] it is clear that that the dnoidal travelling periodic wave is modulationally stable. Therefore, it is not a rogue wave in the sense of the definition (2). However, since the cnoidal travelling periodic wave is modulationally unstable, the rogue wave generated on the cnoidal wave background.
The article is organized as follows. In Sect. 2, we derive the traveling periodic wave solutions given by the Jacobian dnoidal and cnoidal elliptic functions of the emKdV equation, and then we use the nonlinearization of the Lax pair and get the periodic eigenfunctions of emKdV equation spectral problem related to the Jacobian elliptic functions. In Sect. 3, we compute the second, linearly independent solution of the Lax equations. We construct respectively the algebraic soliton propagating on the dnoidal wave background and the rogue waves generated on the cnoidal wave background using the one-fold and two-fold Darboux transformations in Sects. 4 and 5. Section 6 gives the conclusion.
The nonlinearization method
The emKdV Eq. (1) can be represented as the compatibility condition for the following Lax pair of linear equations and where where ψ = (ψ 1 , ψ 2 ) T and λ ∈ C is the spectral parameter. Then, we consider two families of travelling periodic waves of emKdV Eq. (1) are expressed by the Jacobian elliptic function and where k ∈ (0, 1) is the elliptic modulus.
Using the nonlinearization method, the following proposition give the precise expressions for eigenvalues λ 1 and periodic eigenfunctions (ψ 1 , ψ 2 ) T of the Lax pair (3) and (4) related to the travelling periodic wave solutions (5) and (6) of the emKdV Eq. (1).
Proposition 1
The travelling periodic wave solutions (5) and (6) satisfying where a 0 and a 1 are real constants given by Further information, for dnoidal elliptic function solution (5), we have and for cnoidal elliptic function solution (6), we have Then, the periodic eigenfunctions (ψ 1 , ψ 2 ) T related to the travelling periodic wave solutions (5) and (6) satisfying For dnoidal elliptic function solution (5), it follow from (8) and (9) that therefore, we get two particular real eigenvalues For cnoidal elliptic function solution (6), it follow from (8) and (10) that therefore, we obtain a pair of conjugate complex roots
Proposition 2 Let ψ = (ψ 1 , ψ 2 ) T be a solution to the Lax Eqs. (3) and
for dnoidal elliptic function solution (5), we have and for cnoidal elliptic function solution (6), we have Proof Substituting (16) into (3) and using (3), we have By using the relation (11), we can rewrite it as we rewrite (19) in the equivalent form Integrating it with the boundary condition φ(0) = 0, we have where α(t) is a constant of integration in x and may be depend on t. Substituting (16) into (4) and using (4), we have with the help of (5), (6), (11), (13) and (15), after the complex calculation. For dnoidal elliptic function solution (5), we have which can be yield the representation (17). For cnoidal elliptic function solution (6), we have which can be yield the representation (18).
New solution obtained from the dnoidal travelling periodic wave
The following lemma gives the one-fold Darboux transformation of the emKdV equation (1) Lemma 1 Let q is a solution of the emKdV Eq. (1), and (f 1 , g 1 ) T be a nonzero solution of the Lax pairs (3) and (5) with the eigenvalue λ 1 , then is a new solution of the emKdV equation (1).
with the help of (11), we can rewrite (24) as Figure 1 shows that when we choose α = β = 1, the solution surface of the algebraic soliton (25) propagating on the background of the dnoidal periodic wave (5) for the elliptic modulus k = 0.5 or k = 0.99.
New solution obtained from the cnoidal travelling periodic wave
The following lemma gives the two-fold Darboux transformation of the emKdV Eq. (1) Lemma 2 Let q is a solution of the emKdV Eq. (1), and (f i , g i ) T , i = 1, 2, be a nonzero solutions of the Lax pairs (3) and (5) with the eigenvalues λ i , i = 1, 2,, then is a new solution of the emKdV equation (1).
We use the two-fold Darboux transformation (26) to the cnoidal elliptic function solution (6) to obtain a new solution to the emKdV equation (1). Since the cnoidal elliptic function solution (6) is modulationally unstable, the rogue waves generated on the cnoidal periodic background.
(30) Figure 2 shows that when we choose α = β = 1, the solution surface of the rogue waves generated on Fig. 2 The solution surface for the rogue waves generated on the background of the cnoidal periodic wave with α = β = 1 and k = 0.5 or k = 0.99 the background of the cnoidal periodic wave (6) for the eigenvalue λ = λ 1 and λ =λ 1 given by (15).
Conclusion
In this paper, we construct the exact solutions for the emKdV equation. Since the dnoidal travelling periodic wave is modulationally stable, we use the one-fold Darboux transformation to construct the algebraic soliton propagating on the dnoidal wave background. On the other hand, since the cnoidal travelling periodic wave is modulationally unstable, we use the two-fold Darboux transformation to construct the rogue waves generated on the cnoidal wave background. | 2,226 | 2023-02-01T00:00:00.000 | [
"Physics",
"Mathematics"
] |
fHow ATP and dATP act as molecular switches to regulate enzymatic activity in the prototypic bacterial class Ia ribonucleotide reductase
Class Ia ribonucleotide reductases (RNRs) are subject to allosteric regulation to maintain the appropriate deoxyribonucleotide levels for accurate DNA biosynthesis and repair. RNR activity requires a precise alignment of its α2 and β2 subunits such that a catalytically-essential radical species is transferred from β2 to α2. In E. coli, when too many deoxyribonucleotides are produced, dATP binding to RNR generates an inactive α4β4 state in which β2 and α2 are separated, preventing radical transfer. ATP binding breaks the α−β interface, freeing β2 and restoring activity. Here we investigate the molecular basis for allosteric activity regulation in the prototypic E. coli class Ia RNR. Through the determination of six crystal structures we are able to establish how dATP binding creates a binding pocket for β on α that traps β2 in the inactive α4β4 state. These structural snapshots also reveal the numerous ATP-induced conformational rearrangements that are responsible for freeing β2. We further discover, and validate through binding and mutagenesis studies, a previously unknown nucleotide binding site on the α subunit that is crucial for the ability of ATP to dismantle the inactive α4β4 state. These findings have implications for the design of allosteric inhibitors for bacterial RNRs.
the conformational equilibrium away from 44 toward 22 (Ando et al., 2011). Importantly, RNR variants in which residue substitutions prevent formation of the 44-ring are no longer allosterically regulated by dATP (Chen et al., 2018), indicating that dATP-induced 44-ring formation is causative of the allosteric inhibition. Previous structural studies have indicated that ATP and dATP bind to the same site within the cone domains of 2: a structure of E. coli class Ia 70 RNR in the 44 state co-crystallized with dATP (Zimanyi et al., 2016(Zimanyi et al., , 2012 shows the nucleotide in the same site that was occupied by AMP-PNP (an ATP mimic) in a structure of 2 (Eriksson et al., 1997), raising the question of how the binding of nucleotides that differ by one hydroxyl group can lead to such dramatic oligomeric state changes.
80
The N-terminal regulatory "cone" domain containing the allosteric activity site is colored in green. (B) β2 from the structure of the active α2β2 complex (PDB: 6W4X) (C) A compact α2β2 structure is required for catalytic activity (PDB: 6W4X). This complex is in equilibrium with the free subunits and an inactive α4β4 complex (PDB: 5CNS) which is formed when dATP binds to the allosteric activity site. The presence of ATP promotes formation of the active complex. 85 al., 1997). The C root-mean-squared deviation (RMSD) for each structure comparison is shown in Table S4. 120 Figure 2. Overall structures of 2-dATP and 2-ATP agree with that of the previously reported structure of the E. coli class Ia RNR 2 subunit. The regulatory cone domain is colored in a darker shade of the overall structure. Overlay of α2 with AMP-PNP bound at the allosteric activity site (PDB ID: 3R1R) 125 colored in blue/dark blue, α2-dATP colored in teal/dark teal (this work) and α2-(ATP)2 colored in light green/dark green (this work) with nucleotides shown as spheres.
dATP binds similarly to activity site regardless of the presence of 2. The cone domain is at the N-termini of the 2 subunits and is comprised of a -hairpin and a four-helix bundle. We find 130 that dATP binds to the cone domains of the 2 structure in an analogous manner as was observed previously in the structure of the dATP-inhibited 44 state (Zimanyi et al., 2016). The 2 structure presented here is of higher resolution and contains an intact dATP molecule (the dATP molecule was hydrolyzed to dADP during 44 crystallization), and thus affords a more complete picture of dATP binding to the E. coli class Ia RNR enzyme (Fig. 3A,B, Fig. S4A,B). Specificity for the 135 adenine base of dATP is generated by hydrogen bonds between the side chain carboxylate of E15 (acceptor) and N6 (donor) and the backbone amide NH of N18 (donor) and N1 (acceptor).
The base is additionally held in place by packing interactions with residues V7 and I17 (-hairpin), I22 (helix 1), and F49 and I58 (helix 3) (Fig. 3D). The deoxyribose moiety sits between helix 1 and 3 of the four-helix bundle with a single hydrogen bond made between O3´ and helix 3 residue 140 H59. -hairpin residues K9 and R10 and helix 4 residue K91 provide charge neutralization and electrostatic interactions with the phosphates of dATP, which also coordinate a Mg 2+ ion. The only difference in the coordination environment between the 2 and 44 structures is the side chain of T55, which adopts different rotamer conformations that form different hydrogen bonds (Fig. 3A,B). This variation may be due to subtle changes in coordination caused by the loss of the gamma phosphate of dATP over the time course of the 44 crystallization. The first ATP molecule binds to site 1 by making very similar interactions as dATP (Fig. 3). The 175 hydrogen bonds to the adenine base are identical (Fig. 3B,C). The presence of the 2´-hydroxyl group results in a slight adjustment upward of the ribose ring of ATP compared to the ribose in dATP due to the packing of the ribose against I22 of helix 3 (Fig. 3D, E). This subtle shift of the ribose results in the loss of a hydrogen bond between the ribose 3´-hydroxyl group and H59, whose side chain flips 90° and now hydrogen bonds to a water molecule (Fig. 3F). The 180 phosphates of ATP also sit slightly higher in the activity site, with favorable hydrogenbonding/electrostatic interactions being made by K9, R10, and K91 (Fig. 3C). As a result of these subtle movements, the -hairpin is slightly shifted (Fig. 3F).
The second nucleotide-binding site (site 2) is directly adjacent to site 1 within the cone domain, 185 sandwiched between helix 1 and helix 4 (Fig. 4A). The phosphate groups of the two ATP molecules create an octahedral coordination environment around a central Mg 2+ ion. The additional negative charge from the second triphosphate moiety is stabilized by electrostatic/hydrogen-bonding interactions from K9 and K21 (Fig. 4B). R24 is also within 4 Å of the phosphate groups, although it does not make direct contacts in the crystal structure. The site 190 2 ATP ribose makes hydrogen-bonding interactions to the backbone carbonyl of F87 through O2´ and O3´, suggesting specificity at this site for ribonucleotides (Fig. 4C). Unlike in site 1, there are no specific contacts to the adenine base; instead, it is held in a hydrophobic pocket between helix 1 and 4.
195
Creation of second ATP binding site involves a coordinated movement of three side chains. The transition from the apo-or dATP-to the two ATP-bound form of the cone domain requires a dramatic and coordinated shift of three key residues-H59, F87 and W28-all three of which adopt different rotamer conformations upon ATP binding (Fig. 5, Fig. S5). As mentioned above, the side chain of H59 hydrogen bonds to O3´ of dATP but is flipped out of the activity site 200 when ATP is bound. This new position of the H59 side chain is in close proximity to the side chain of F87, and F87 must adopt a flipped-down position. This flipped-down position of F87 brings its side chain into the pocket occupied by the W28 side chain, causing W28 to flip 90° sideways. The net result of these side chain movements is formation of a new nucleotide-binding site in the cone domain. F87 movement creates the binding pocket for the ribose of the second ATP molecule 205 (Fig. 5, Fig. S5) and W28 movement creates a cavity for the base of the second ATP. The adenine base is nicely sandwiched between side chains of W28 and F97 as a result of W28's movement.
210
F97 does not change, as dATP only binds in site 1. (B & C) Space filling models of the interaction of the dATP/ATP molecules with helix 4 and the W28, H59, F87, and F97 residues.
Equilibrium binding assays are consistent with two higher affinity ATP binding sites at the activity site and one lower affinity ATP binding site at the specificity site. The presence of 215 two ATP molecules at the activity site was unexpected. Previous ultrafiltration binding studies by Ormö and Sjöberg (Ormö and Sjöberg, 1990) indicated that two ATP molecules bind per , which was interpreted as signifying the binding of one ATP molecule to the specificity site and one to the activity site. The 2-ATP structure shows three ATP molecules per in total: one ATP in the specificity site and, as described above, two ATP molecules in the activity site. To pursue the 220 possibility that a lower-affinity ATP binding site might have gone undetected if the ATP concentration was not high enough in the previous ultrafiltration binding studies, we have re-run these assays using a higher concentration of 3 H-ATP and using non-linear regression to analyze the data. In this assay, the concentration of free and bound 3 H-ATP is determined after separation of the protein by centrifugation in a spin filter, allowing for determination of equilibrium binding 225 parameters. The resulting binding curves from multiple experimental runs were fit with a one-state binding model that assumes all binding sites are equivalent. This simplifying assumption is necessary due to the impracticality of fully sampling the binding curve. Using this approximation, the Kd for ATP binding at 25 °C was estimated to be 158 ± 37 μM with the maximum number of binding sites being 6.8 ± 0.6 per 2 (Fig. 6). This number is consistent with this structure, which 230 shows 3 ATP molecules per (6 ATP molecules per 2).
To verify the presence of multiple binding sites for ATP at the activity site, the ultrafiltration experiment was repeated in the presence of 100 or 500 μM dGTP, which should bind at full occupancy to the specificity site given that the Kd for dGTP at the allosteric specificity site is 0.77 235 μM (Ormö and Sjöberg, 1990). In the presence of dGTP, the Kd for ATP was measured to be 120 ± 52 μM with a maximum of 3.8 ± 0.5 binding sites per 2 (Fig. 6). These data are consistent with the binding of two ATP molecules in each cone domain. No difference in binding was observed at the two different dGTP concentrations, suggesting that the specificity site is indeed saturated with dGTP under these conditions. Importantly, these data also indicate that the binding sites in 240 the cone domain are the higher affinity sites for ATP, further suggesting that the initial binding study was most likely reporting on two ATP molecules binding to the activity site. We can now also explain the previous observation that ATP binding is cooperative (Brown and Reichard, 1969b;Ormö and Sjöberg, 1990), which was hard to explain if ATP was binding to sites that are more than 40 Å apart, but easy to explain if ATP molecules are coordinated by a single Mg 2+ ion 245 and if the binding of the first ATP creates the binding site for the second ATP, as it appears to do.
Thus, in terms of the question of how the difference of one hydroxyl group (ATP vs dATP) can destabilize the 44 ring and shift the equilibrium back to an active 22 state, these structures and these binding data suggest that it is the one extra hydroxyl group of ATP, and one whole extra molecule of ATP, which are responsible for 44 destabilization. Ormö and Sjöberg (Ormö and Sjöberg, 1990) was used with modifications to determine the equilibrium binding parameters for ATP. Each point is an independent measurement. The sample consisted of 7-20 μM α2, 3 H-
255
ATP binding to 2 alone is in gray circles. 3 H-ATP binding to 2 in the presence of 100 or 500 μM dGTP is in black squares. These dGTP concentrations saturate the specificity site allowing for the determination of activity-site-only binding parameters. The amount of bound nucleotide was found by subtracting the amount of free nucleotide from total nucleotide and resulting binding curves from multiple experimental runs were fit with a one-state binding model that assumes all binding sites are equivalent. Using this approximation,
260
the Kd for ATP binding at 25 °C was estimated to be 158 ± 37 μM with the maximum number of binding sites being 6.8 ± 0.6. In the presence of dGTP, the Kd for ATP at the allosteric activity site was measured to be 120 ± 52 μM with a maximum of 3.8 ± 0.5 binding sites. These data are consistent with the binding of two ATP molecules at each activity site.
265
Identity of nucleotide in site 1 can be uncoupled from nucleotide binding in site 2 through a W28A substitution. With the confirmation from the binding assays that two ATP molecules bind to the activity site, we next sought to test the importance of ATP binding at site 2 to the overall activity regulation of E. coli class Ia RNR. We generated individual alanine variants of residues W28, F87, and F97, which comprise the second ATP binding site (Fig. 4). As described above,
270
F87 and W28 side chains move between rotamer conformations that alternatively block the second ATP from binding (residue positions in yellow in Fig. 5) and support the second ATP binding (residue positions in pink in Fig. 5), as signaled by the position of H59, which moves in response to the presence of ATP versus dATP in site 1. We reasoned that substitutions of either F87 or W28 with Ala would create room for ATP binding in site 2 regardless of the position of H59 275 and thus be independent of the presence of ATP in site 1. In other words, F87A and W28A substitutions would be expected to uncouple the binding of ATP to site 2 from the identity of the nucleotide effector in site 1. To test this idea, we obtained structural data for W28A-2 with ATP, with dATP/ATP, and with dATP/GTP (Table S1, S3). First, we wanted to determine if ATP could enter site 2 with dATP in site 1 in a W28 construct, so we grew crystals in the presence of a 280 mixture of 3 mM to 5 mM ATP and 1 mM dATP. The resulting 3.40-Å-resolution structure revealed clear density for two nucleotides in the activity site, but the identity of the site 1 nucleotide was ambiguous given the low resolution of the structure (Fig. S6A,B). Due to the lack of clarity about the identity of the nucleotide in site 1, this structure could not be used as evidence that a W28A substitution leads to site 1, 2 uncoupling. Unfortunately, attempts to obtain a high-resolution 285 structure with ATP and dATP have so far been unsuccessful.
Thus, we tried a different approach, co-crystallizing W28A 2 with 1 mM dATP and then soaking with 3 mM GTP. As described above, site 1 is specific for a nucleotide with an adenine base (see Table S1, S3). With the W28A substitution, the side chain of F87 is positioned down, creating the site 2 binding pocket, and allowing for nucleotide binding, despite the fact that H59 has not moved (Fig. 7). The density in site 2 is 295 consistent with GTP binding in a very similar conformation as found for ATP in the WT 2-(ATP)2 structure (Fig. S6D,H). We also confirmed that W28A substitution does not alter ATP binding in site 2 by determining a crystal structure of W28A 2 with ATP as the only effector at 2.6-Å resolution (Table S1, S3). We find that the W28A substitution results in a more open pocket for ATP binding at site 2, but no structural perturbations in the bound ATP molecule are apparent 300 ( Fig. S6E,F,G,H). All site 1 residues and site 2 residues F87 and F97 are identical to WT 2-(ATP)2 ( Fig. 7, S6E,F,G,H). Therefore, the W28A substitution appears successful in uncoupling nucleotide binding in site 2 from the identity of the nucleotide in site 1 without substantially altering ATP binding to site 2.
305
Although not the focus of this report, the W28A 2-(ATP)2/CDP structure at 2.6-Å resolution structure also provides a higher-resolution picture of the active site bound to the CDP/ATP substrate/effector pair (Fig. S7). The hydrogen bonding pattern that affords substrate specificity is the same as that described previously (Zimanyi et al., 2016).
315
Substitutions W28A, F87A, and F97A at the second ATP binding site disrupt activity regulation. With the knowledge that the W28A substitution uncouples nucleotide identity at site 1 from that of site 2, we proceeded to investigate the relevance of site 2 to allosteric regulation of activity. If site 2 is not important in either dATP-induced activity down-regulation or ATP-induced 320 activity up-regulation, then the W28A RNR variant should behave like WT protein. As controls, we also investigated an F87A RNR variant, which should also uncouple site 1 and site 2, and an F97A RNR variant that should not lead to site 1, 2 uncoupling, but would be expected to decrease the affinity of ATP binding in site 2 due to loss of the stacking interaction of the adenine base with the F97 side chain (Fig. 4). Our results described below indicate that site 2 is, in fact, critical for 325 the ability of RNR to be regulated by the ratio of dATP to ATP, but is not important for activity under ATP-only conditions, and is not important for the ability of dATP to down-regulate RNR when no ATP is present (Fig. 8). In particular, our data show that under activating conditions (3 mM ATP as specificity and activity effector), these three enzyme variants had activities comparable to wild-type 2 with CDP as substrate (Fig. 8A). In the presence of inhibitor dATP,
330
F97A was inactivated to a degree similar to wild-type 2 (5-10% of maximal), and the activity of W28A and F87A was reduced somewhat less (20-25% of maximal) (Fig. 8A). Overall activity regulation in these three mutants is thus only moderately perturbed when ATP and dATP are used as effectors in isolation.
340
RNR variants that are similarly active with 3.0 mM ATP (black bars) as WT, deactivate with 0.175 mM dATP like WT (white bars), although not as fully as WT, and unlike WT, re-activate when 3.0 mM ATP is added to 0.175 mM dATP (grey bars). (B) Specific activity of WT and the W28A variant RNR assessed for a wider range of ATP and dATP conditions. Whereas WT RNR (dark grey bars) is largely inactive when the dATP concentration is 0.175 mM or higher, regardless of the amount of activator ATP, the W28A variant (light gray bars) is very sensitive to the addition of ATP and shows activity even when ATP concentrations are ten-fold less than dATP concentrations. (C) Specific activity of WT and the W28A variant RNR in the presence of inhibitor dATP (0.5 mM) with activator ATP replaced by 1.0 mM GTP. The substitution of W28 for alanine appears to alter the specificity of site 2 for ATP such that GTP can now upregulate enzyme activity. dATP and ATP concentrations are shown below the graph. Error bars shown are standard 350 deviations.
To evaluate the activating effect of ATP in combination with dATP, we tested a condition containing 3 mM ATP and 0.175 mM dATP (Fig. 8A), which are physiologically relevant concentrations for E. coli. With 3 mM ATP and 0.175 mM dATP, wild-type 2 and F97A activity 355 are stimulated only slightly relative to 0.175 mM dATP alone, and remain low relative to 3 mM ATP alone. In contrast, W28A and F87A variants in 3 mM ATP and 0.175 mM dATP both recover near maximal activity under these conditions. This finding is consistent with the ability of ATP to bind to site 2 in W28A and F87A variants even when dATP is bound in site 1, and is consistent with the importance of ATP binding in site 2 to the up-regulation of RNR activity. Notably, our 360 structural data suggest that site 2 is restricted to ribonucleotides and these data support this conclusion. If dATP was binding to site 2 in either the W28A or F87A RNR variant, then these proteins would not show these WT-like levels of dATP inactivation. Therefore, site 2 does not appear to be important to the ability of RNR to be turned off by dATP, but is important for the ability of RNR to be turned on by ATP.
365
We further tested the W28A variant under different ratios and concentrations of dATP and ATP ( Fig. 8B) and found that the W28A variant is fully active in the presence of 1.0 mM ATP under extremely high concentration of dATP (0.5 mM or 1 mM). With a Kd of ~ 6 M for dATP binding (Brown and Reichard, 1969b;Ormö and Sjöberg, 1990) and ~120 M for ATP binding to site 1, 370 concentrations of 1.0 mM dATP and 1.0 mM ATP should be completely inhibitory, but W28A RNR is fully active. Activity does start to go down when the concentration of ATP is lowered to 0.1 mM and dATP is kept at 1.0 mM. Again, these data indicate the importance of site 2 to successful activity regulation by dATP and ATP.
375
Given the structural observation that GTP binds to site 2 in W28A-2, we investigated whether GTP can reverse dATP-inactivation, and it can (Fig. 8C). Under conditions where WT RNR is inactive (0.5 mM dATP and 1.0 mM GTP), W28A-2 is active, although not as active as W28A-2 with 0.5 mM dATP and 1.0 mM ATP. These results indicate that GTP binding to site 2 can reverse dATP-induced inhibition, but that GTP does not bind as well as ATP to this site. It is very 380 interesting that the presence of a ribonucleotide in site 2, regardless of identity, can restore RNR activity. Of course, under physiological conditions, the concentration of ATP is much higher than the concentrations of other ribonucleoside triphosphates (Bochner and Ames, 1982;Buckstein et al., 2008), such that binding of other NTPs in site 2 is unlikely to be relevant in vivo. This result does, however, serve to confirm the importance of site 2 for up-regulation of RNR activity. Again, 385 these data suggest that allosteric regulation by dATP and ATP is not just due to the difference of one hydroxyl group. The hydroxyl group is extremely important, but it is not the whole story. Both the canonical activity site (site 1) and the newly discovered site 2 are critical for allosteric regulation.
ATP binding leads to an increase in the helix 2 helicity, which in turn destabilizes the 44
structure. We next considered the molecular basis for 44 destabilization by the binding of two molecules of ATP by comparing 44 and 2-(ATP)2 structures. There are three changes in the 405 cone domain of note when comparing these structures. First, in the presence of the two molecules of ATP, the 9 KRDG 13 motif of the -hairpin of the cone domain is shifted upward ~2.3 Å (~6° to ~7º) at the tip relative to the pivot point at V7 between these two structures (Fig. 9A). As mentioned above, as compared to dATP bound in site 1, the ribose of the site 1 ATP and the ATP phosphates sit higher in the activity site, and the site 1 ATP phosphates now share a coordinating the position of K9 and R10 of the 9 KRDG 13 motif. R10 and K9 both shift up and K9 no longer contacts the adenine base of the nucleotide in site 1 (Fig. 3A,B), instead contacting a -phosphate of the site 2 ATP (Fig. 4B,C). As a result, the 9 KRDG 13 motif of the -hairpin of the cone domain shifts upward. This upward shift results in the second change of note: an approximately equal and 415 opposite downward shift (2.5 Å) of residues Y50-I53 near the base of the -hairpin (Fig. 9B). This downshift of residues Y50-I53, in turn, releases a strain on the residues at the end of helix 2, which can now fold up into an additional helical turn, which is the third change (Fig. 9B).
Consequently, helix 2 has an extra turn in the 2-(ATP)2 structure than it does in the dATPinactivated 44 structure (Fig. 9C).
420
Thinking about these conformational changes in the opposite direction, the order of events would be: dATP replaces ATP in site 1 and shifts the -hairpin down; residues Y50-I53 shift up in response; and helix 2 unwinds due to the strain. Thus, the -hairpin appears to act as a lever responding to the presence of dATP or 2 ATP molecules, alternatively pulling on and unwinding 425 helix 2, or relaxing and refolding helix 2 (Fig. 9C). This change in helicity also alters the positions of residues H46 and I47 dramatically. dATP binding unwinds helix 2, which creates a binding pocket for residues of the subunit, stabilizing an 44 structure. To understand the significance of helix 2 unwinding in 430 the present of dATP, we evaluated the − interface in the previously determined 44 structure (Zimanyi et al., 2016(Zimanyi et al., , 2012. We find that the unwinding of helix 2 creates a hydrophobic binding pocket for residue I297 of the subunit to bind, in which I297 packs against I47 (Fig. 9D). When helix 2 is fully wound, as it is in the apo structure (Fig. 9E, PDB: 1R1R (Eriksson et al., 1997)) and as it is in the α2-(ATP)2 structure (Fig. 9F), there is no binding pocket for I297 of between 435 helix 1 and 2. The turn of the helix 2 physically blocks the -binding site. Additionally, polar residue H46 is pointing toward the binding site for I297, making for an unfavorable interaction, and I47 is unavailable to make a favorable one as it is pointing is the opposite direction ( Fig. 9D-F). Thus, the unwinding of helix 2 of the cone domain of appears key for the formation of a binding site for the subunit, creating not only the room for to bind, but also swapping out an unfavorable 440 interaction (H46) for a favorable one (I47).
Considering the conformational changes in reverse: the binding of two molecules of ATP would destabilize the 44 inactive state by shifting the -hairpin lever up, releasing the strain on helix 2.
As helix 2 refolds, the hydrophobic pocket for residue I297 of is lost, and the favorable interaction with I47 is replaced with an unfavorable one (H46) to ensure 's departure. The overall interface is small, ~525 Å 2 , making it relatively easy for small changes to break and apart.
Although the above mechanism of helix 2 unwinding and re-winding in the presence of dATP and ATP, respectively, beautifully explains how the binding surface for the subunit can alternately 450 be exposed and tucked away, and all ATP-bound structures show helix 2 fully wound (Fig. S8), we were puzzled as to why our structure of 2-dATP, described above, did not show an unwound helix 2. The structure of dATP-bound 44 shows the unwound helix 2, but we would expect helix 2 unwinding to proceed binding, since helix unwinding appears to create the binding site. The lack of helix unwinding in the 2-dATP structure cannot be attributed to differential contacts made 455 by dATP, since as noted above, the same contacts are observed in the 2 structure as in the 44 structure (Fig. 3A, B). Thus, we investigated whether lattice contacts in the crystal might be restricting movement of the −hairpin in the 2-dATP structure, such that helicity of helix 2 would be unchanged, and in fact, residues at the base of the −hairpin (Y50-I53) are involved in lattice contacts (Fig. S8).
We therefore sought a different crystal form for 2 in which the -hairpin region of the cone domain is less restricted by lattice contacts. Finding new crystal forms for 2 has historically been problematic due to the fact that the 2 subunits are not very soluble in the absence of the subunits. In fact, the first crystal form of 2 was obtained through co-crystallization of the 2 465 subunit with a short peptide (residues Y356-L375) that contained the sequence of the subunit C-terminus (Uhlin and Eklund, 1994). With this in mind, we were able to obtain a new crystal form of 2 by fusing the 35 C-terminal residues of (342-376) to the C terminus of an 8-residue truncated (2-C35) (Fig. S9). The overall structure of this crystal form with dATP bound at 2.10-Å resolution (Table S1, S2) is essentially identical to that observed above for free 2; however, 470 the cone domain is slightly less restricted by lattice contacts near the -hairpin (Fig. S8E). With the restraints of the crystal lattice relaxed, helix 2 adopts the unwound conformation as in the 44 complex (Fig. 9G). The hydrophobic pocket vacated by H46 is left unfilled in this structure, and residues 19-27 (helix 1) and 35-43 (helix 2) that interact with 2 in the 44 complex are exposed to solvent. This structure thus demonstrates that specific contacts with 2 are not required for helix
Discussion
Allosteric activity regulation in enzymes often involves the movement of side chains in an active site in response to the binding of an allosteric effector in order to either increase or decrease 480 enzyme activity. With this in mind, the incredible oligomeric state change associated with E. coli class Ia RNR activity regulation is even more impressive (Fig. 1C). E. coli class Ia RNR's mechanism for allosteric regulation of activity can be described as a game of 2 keep away, preventing radical transfer and thus activity by keeping 2 at arms-length, trapped in a ring structure. But, how is it that the dATP binding stabilizes the trapped ring state whereas, ATP, a 485 molecule that differs only by a single hydroxyl group, allows the ring to fall apart, freeing 2. The work presented here suggests that the difference is one hydroxyl group and a whole second molecule of ATP and indicates that protein conformational changes that are involved in controlling ring stability are considerable. In fact, one could describe the molecular mechanism involved as the protein-equivalent of a Rube Goldberg machine.
490
A key component of this Rube Goldberg machine is H59. Previously, Sjöberg and co-workers showed that a H59A variant was not able to discriminate effectively between dATP and ATP and implicated H59 in triggering allosteric regulation (Rofougaran et al., 2008). Studies on mammalian RNR, and the equivalent residue (D57), have additionally showed the importance of this residue 495 in allosteric regulation (Caras and Martin, 1988;Reichard et al., 2000). Now, through this work, we can explain how H59 communicates the presence of dATP or ATP and triggers the appropriate response in the E. coli class Ia RNR enzyme. We can now also describe that response, which we find involves three sets of conformational changes: the H59-triggered forming/unforming of allosteric effector site 2; the upward/downward tilting of the -hairpin and accompanying 500 relaxing/tugging of helix 2; and the winding/unwinding of helix 2 and accompanying sealing/revealing of the subunit binding pocket on . domain: H59 hydrogen bonds to the ribose (R) hydroxyl group of dATP (yellow); site 2 is blocked by Phe87 and Trp28; β-hairpin (black) is anchored down via contacts between dATP and R10 and K9; and helix 2 (green) is unwound, creating binding surface for . (B) ATP-bound cone domain: H59 does not engage in hydrogen bonding to the site 1 ribose (R) of ATP (pink) and H59 is tilted; F87 and W28 have moved and site 2 is open with ATP (grey) bound; the β-hairpin (black) is pushed up due to presence of 2ATPs; the strain on helix 2 (green) is decreased and helix 2 is rewound. (C) The binding surface for β (orange) is alternatively created by dATP (yellow) pulling the β-hairpin (black) down, and hidden by 2 molecules of ATP pushing the β-hairpin (black) up, which unwinds and rewinds helix 2 (green) respectively.
In terms of stabilizing the 44 inactive ring-like state, H59 contributes by communicating the 515 presence of dATP to Phe87 and Trp28 such that they adopt positions that block allosteric effector site 2 (Fig. 10a, Fig. S5). With allosteric effector site 2 closed for nucleotide binding, dATP in site 1 anchors the tip of the -hairpin down by contacts made to R10 and K9. With the tip down, a strained helix 2 is unwound, and the binding surface for the subunit is stabilized (Fig. 10a,c).
520
The molecular mechanism by which ATP frees the subunit from the 44 ring is surprisingly elaborate. With RNR turned off, the ratio of ATP to dATP will increase, and ATP, despite its lower affinity for allosteric effector site 1 (Kd of ~120 M), will displace dATP (Kd of ~ 6 M) (Brown and Reichard, 1969b;Ormö and Sjöberg, 1990). Due to the extra hydroxyl group on the ribose of ATP, the ribose sits higher in allosteric effector site 1 to prevent a steric clash between the 2' hydroxyl 525 group and I22 (see Fig. 3F). This repositioning of the ribose, which has also been reported for human RNR (Fairman et al., 2011), breaks the hydrogen bond between H59 and the 3' hydroxyl group. The side chain of H59 tilts to one side.
The tilting of the H59 side chain starts a Rube Goldberg-like mechanism in which H59 movement 530 results in the movement of the side chain of F87, which in turn results in the movement of the side chain of W28. With the side chains of F87 and W28 repositioned, the second ATP can bind to previously unavailable site 2 (Fig. 10b). With six ATP phosphates now positioned around one Mg 2+ ion, the -hairpin tip is pushed up, and with the -hairpin lever tipped up, the strain on helix 2 residues is decreased and helix 2 re-winds, sealing away I47 and the binding pocket for the 535 subunit (Fig. 10c). Ensuring the departure of , helix 2 re-winding swaps hydrophobic residue I47 with a polar and potentially charged H46. Thus, the Rube Goldberg mechanism concludes with an exchangeable protein surface that alternatively attracts (I47-outward facing) and repels (H46outward facing) the subunit. Consistent with the ability of an Ile-to-His exchange at a small interface (~525 Å 2 ) to disrupt that interface, is the previous observation (Chen et al., 2018) that 540 site-directed single substitutions of residues at this interface (e.g. L43Q, S39F, E42K) abolish ring formation.
It is quite impressive from a protein design prospective that a 95-residue domain has two "hideaway" binding sites, one for a nucleotide effector (site 2) and the other for a protein subunit, 545 and that these sites are in communication. The exposure of one binding site necessitates that the other is sealed. When ATP is bound and site 2 is exposed, the -subunit binding site is tucked away. Conversely, when dATP is bound and site 2 is tucked away, the -subunit binding site is exposed. Importantly, we were able to demonstrate that site 2 is relevant to allosteric activity regulation through a W28A substitution. Although dATP inhibits W28A RNR in the absence of any 550 ribonucleoside triphosphate, addition of ATP or GTP restores activity for W28A RNR under conditions that are inhibitory for the WT protein. In other words, when site 2 is open, the identity of the nucleotide in site 1 does not matter. Thus, like the H59A RNR variant (Rofougaran et al., 2008), W28A RNR does not discriminate between dATP and ATP. Although unexpected, the involvement of two ATP binding sites in the cone domain of E. coli class Ia RNR has a certain 555 structural and chemical logic to it; when ATP levels rise in response to RNR inactivity, that the binding of two ATP molecules per subunit, rather than one ATP molecule, are required to shift the conformation equilibrium from 44 to 22.
It is too early to say whether human class Ia RNR will utilize one or two ATP molecules in its 560 allosteric regulatory mechanism. W28 is not conserved in human RNR and there is no evidence for the unwinding of helix 2 in the formation of the − interface of the dATP-inhibited 6 ring . D57 (equivalent of H59) appears to be responsible for signaling the presence of dATP versus ATP Caras and Martin, 1988;Reichard et al., 2000), but the molecular response that follows is unknown. Given that the interface involved in 565 formation of inactive rings is different in human RNR (−) than in E. coli (−), we do not expect the molecular mechanism to be the same. This difference in the nature of the inhibited states is exciting and has potential application for selective RNR inhibitor design. FDA-approved drug hydroxyurea and prodrugs gemcitabine and clofarabine either inhibit human RNR and not E. coli or both human and E. coli class Ia RNR. There are no FDA approved inhibitors that are specific 570 for bacterial RNRs.
Molecules that stabilize the inactive ring structures of RNR would be expected to be successful RNR inhibitors. Notably, clofarabine triphosphate, which is used in the treatment of pediatric acute leukemia (Pession et al., 2010), inhibits human RNR with the generation of "persistent hexamers" 575 (Aye et al., 2012) and does not inhibit the 44-forming E. coli class Ia RNR. Recently, we showed that the class Ia RNR from N. gonorrhea (NgRNR) forms 44 inactive rings that are analogous to the rings formed by the E. coli enzyme, and that compounds that inhibit NgRNR are not crossreactive with human RNR (Chen et al., 2018). Although there is no structure of NgRNR bound to these compounds, we do know that N. gonorrhea develops resistance to them when mutations occur in NgRNR at the − interface of the 44 ring, suggesting that these compounds target an inactive ring structure (Chen et al., 2018).
The work presented here should aid in the development of compounds that target, stabilize and thereby increase the lifetime of inactive 44 ring structures of bacterial RNRs. In particular, our 585 studies suggest that small molecules that prevent site 2 from opening, by blocking W28 from moving, for example, would stabilize the inactive 44 ring structure. The inactive ring structure should also be stabilized through maintenance of the -hairpin in the downward position or maintenance of the unwound conformation of helix 2. In contrast, compounds that stabilize an open site 2 would be expected to impede down-regulation by dATP and lead to persistently active 590 bacterial RNRs. We hope that the molecular information presented here will facilitate the development of new antibiotic compounds. Antibiotic resistance is an eminent threat (CDC, 2018; Willyard, 2017) and RNR is a largely unexplored target to address this threat.
For the structures of wild-type 2 bound to dATP and ATP, high-purity 100 mM solutions of ATP and dATP were purchased from USB Corporation or Invitrogen. A high-purity 100 mM solution of dGTP was purchased from USB Corporation for ultrafiltration assays.
610
Construct and protein preparation. Untagged 2 and 2 were prepared as previously described (Salowe et al., 1987;Salowe and Stubbe, 1986). The concentrations of 2 and 2 were determined using ε280 of 189 and 131 mM -1 cm -1 , respectively; unless noted otherwise, all molar concentrations correspond to the subunit dimer.
For His6-W28A-2, an estimated ε280 of 182 mM -1 cm -1 was used to determine the final protein concentration.
620
The 2-C35 fusion construct was made by sequential megaprimer mutagenesis (Xu et al., 2003) in which double-headed PCR primers were used to amplify a 144 bp insert encoding the C-terminal tail of (the C-terminal 35 residues) from a plasmid encoding wild-type . These megaprimers were subsequently used to amplify the wild-type His6--encoding plasmid, while excising the final eight residues of the C-terminal tail. The final construct contains an N-terminal His6-2 (Minnihan et al., 2011). As the resulting construct has no additional tryptophan or tyrosine residues, ε280 of 189 mM -1 cm -1 was used to determine the final protein concentration. Crystallization of W28A-2. Crystals of W28A-2 were identified in sparse matrix trays as described for wild-type 2 above. For screening and optimization of the ATP/CDP co-crystal complex, His6-W28A-2 at 60 µM in assay buffer (50 mM HEPES pH 7.6, 15 mM MgCl2, 1 mM EDTA) was pre-incubated with 10 mM ATP and 1 mM CDP for 20 min at ~25 ºC before mixing X-ray data collection. Diffraction data for wild-type 2-dATP and 2-ATP were collected at the Advanced Photon Source (APS) on beamline 24ID-C on a Quantum 315 CCD detector at 680 100 K. The W28A-2-(ATP)2/CDP, W28A-2-(dATP/ATP), W28A-2-(dATP/GTP), and wild-type 2-C35-dATP datasets were collected at APS beamline 24ID-C on a Pilatus 6M detector (Dectris) at 100 K. Diffraction data were indexed, integrated, and scaled using HKL2000 (Otwinowski and Minor, 1997), with statistics shown in Table S1.
Crystallization of wild-type
Structure solution and refinement. The method of structure solution is described 685 individually for each structure below. Model refinement statistics for wild-type 2 structures are found in Table S2 and model refinement statistics for W28A 2 structures are found in Table S3.
For all structures, multiple rounds of refinement were performed with phenix.refine (Adams et al., 2010) in the SBGRID software package (Morin et al., 2013). For all structures, refinement consisted of rigid body, positional, and individual B factor refinement. Translation-libration-screw 690 (TLS) B factor refinement was used for all structures except W28A-2-(ATP)2/CDP. Manual rebuilding and geometry correction was performed in Coot (Emsley et al., 2010). Simulated annealing composite omit maps calculated in Phenix were used to validate modeling of ligands in all structures. For structures with resolution of 3 Å or better, waters were placed automatically in Phenix with manual editing and placement in Coot (Emsley et al., 2010). Ligand restraint files 695 were obtained from the Grade Web Server (Global Phasing Ltd.). Coordination distances for Mg 2+ ions were explicitly defined at 2.1 Å with loose restraints. All structural figures were made in PyMOL v. 1.7 and v. 2.3.7 (Schrödinger, LLC). Refinement statistics for each final model are given in Table S2 (wild-type cone domain) and Table S3 (W28A mutants).
The 2-dATP co-crystal structure was solved by molecular replacement in Phaser (McCoy 700 et al., 2007) at 2.55-Å resolution using a single monomer (chain A) from a previously solved structure of 2 (PDB ID 3R1R) (Eriksson et al., 1997) as the search model. The 2-ATP co-crystal structure was solved by molecular replacement in Phaser at 2.62-Å resolution using the refined 2-dATP co-crystal structure with all ligands and waters removed. Since the crystal form of these two structures is essentially identical, the cross-validation sets for the 2-ATP structure was 705 preserved from the 2-dATP structure. For both structures, the resolution was extended to the full range after partial model building and refinement at lower resolution. CNS 1.3 (Brünger et al., 1998) was used for early stages of refinement. ATP or dATP and Mg 2+ ions were placed into omit density prior to addition of water molecules. In both 2-dATP and 2-ATP models, two chains are present in the asymmetric unit in a physiological dimer. Loose non-crystallographic symmetry 710 (NCS) restraints on both coordinate positions and B factors were used throughout refinement and then removed during the final rounds. Residues 5-736 (of 761) are present in each chain of both structures. 1-2 additional residues are visible in some chains at the N-terminus, but are poorly structured. The C-terminal 25 residues are thought to form a flexible tail essential for the rereduction of the active site disulfide upon turnover.
715
The W28A-2-(ATP)2/CDP structure was solved by molecular replacement in the Phenix implementation of Phaser (McCoy et al., 2007) with a 3.60-Å resolution cutoff. Due to the large changes in the overall conformation upon binding of the substrate CDP, the best search model was a single 2 dimer from the wild-type 44-dATP/CDP structure (PDB ID 5CNS) (Zimanyi et al., 2016). The initial Rfree for this molecular replacement solution was 0.41. The resolution was 720 extended to 2.60 Å after rigid body refinement and manual rebuilding of the model. Simulated annealing and real-space refinement were used early on until the model converged. Eight 2 dimers are present in the asymmetric unit, organized as two separate dimer-of-dimer units. A stable 4 oligomeric state has never been observed in solution for E. coli class Ia RNR, and analysis of the overall structure by the PISA server suggests that the only stable assembly is the 725 2 dimer and not 4. The overall structure of the subunit's (/)10 barrel closely resembles that observed in the substrate/effector-bound 44 complex, despite the complete absence of 2 from this crystal form. The cone domain was partially deleted and rebuilt manually during refinement.
ATP and Mg 2+ ions were placed into omit density. NCS restraints were used throughout refinement. Composite omit maps were used to verify the final structure, especially to ensure model bias did not influence the rebuilding of the cone domain. The N-terminal His6 tag and thrombin cleavage site and residues 737-760 at the C-terminus are disordered in all chains.
Residues 645-652, which form a flexible -hairpin, are poorly ordered in four of the eight chains and have been omitted where there is no clear density.
The W28A-2-dATP/ATP and W28A-2-dATP/GTP structures were solved by molecular 735 replacement in Phaser (McCoy et al., 2007) with the wild-type 2-(ATP)2 structure as the model with no resolution cutoff. This crystal form contains lattice contacts that are similar to those observed in the wild-type 2-ATP and 2-dATP structures, but four molecules are present in the asymmetric unit instead of two, and the length of the b-axis is increased by 20 Å (a ~17% increase). The C-terminal tail of (24 residues) is disordered in all of the chains of each structure 740 along with 3-4 residues at the N-terminus, and the N-terminal His6 tag and thrombin cleavage site.
There is no clear reason for the change in space group that is apparent from the crystal packing, but it is possible that the presence of the N-terminal His6 tag disrupts some crystal contacts in this crystal form, although the N-terminus is not ordered in either W28A or wild-type free 2 structures.
To prevent overfitting, strict NCS for both coordinate positions and B factors was maintained 745 throughout refinement. All ligands were removed prior to molecular replacement and were rebuilt based on omit maps. Due to the low resolution and similarity of the starting model, few structural changes were observed during refinement. dATP was placed in the specificity site based on omit maps. ATP/dATP and GTP/dATP were placed according to the positions of the ATP/ATP pair in the wild-type 2-(ATP)2 structure. To assess whether the structure was phase-biased by the 750 choice of 2-(ATP)2 as the molecular replacement model, 2-dATP was also tested. The molecular replacement solution was substantially worse in this case, but could be corrected with rigid body refinement of the hairpin and rotamer flips of W28 and F87. Composite omit maps were used to verify ligand placement, the conformation of F87, and the W28A mutation. The N-terminal His6 tag and thrombin cleavage site and residues 737-760 at the C-terminus are disordered in all 755 chains. All other residues are present in all four chains of both structures.
The 2-C35-dATP/CDP structure was solved to 2.10-Å resolution by molecular replacement in Phaser (McCoy et al., 2007) using an 2 structure that contained two bound peptides that mimicked the sequence of the β tail (PDB ID 1R1R) (Eriksson et al., 1997) with no resolution cutoff. The first 20 residues of 2 were not included in the search model. The initial Rfree 760 for this starting model was 0.29. The crystal form contains two molecules in the asymmetric unit.
Only one of the two cone domains is fully structured (beginning after the N-terminal His6 tag and thrombin cleavage site); the other cone domain has no density for residues 1-19 and 47-58 due to a close crystal contact. Residues 738-753 of the tail and 342-364 ( numbering) of the attached tail are disordered, but the remainder of the tail, 365-376 ( numbering), renumbered 765 as 1365-1376, is bound as previously observed for both the peptide in the 2 structure and for the tail in the 44 complex (Eriksson et al., 1997;Zimanyi et al., 2016). No NCS restraints were used as there were substantial differences in the cone domains in the two protomers. dATP was modeled into the final structure based on omit maps. At both specificity and activity sites, a Mg 2+ ion coordinates three phosphate oxygens of dATP and three water molecules. Composite omit 770 maps generated in Phenix (Adams et al., 2010) were used to verify the cone domain conformations and effector ligands.
Determination of equilibrium ATP binding parameters by ultrafiltration. The ultrafiltration method described by Ormö and Sjöberg (Ormö and Sjöberg, 1990) wash, the sample consisting of 7-20 μM 2 (in assay buffer with 5 mM DTT) and 50-1000 μM 3 H-ATP with a specific activity of either 690 cpm nmol -1 or 3090 cpm nmol -1 in a total volume of 150 μL was added to the filter. The solution was equilibrated in a 25 °C water bath for 5 min. and a 25 μL aliquot was taken for determination of total nucleotide concentration. The sample was then centrifuged at 12,000 × g for 1 min. 25 μL of the filtrate was then taken to determine free 785 nucleotide concentration by scintillation counting. To isolate the binding events at the activity site, 100 or 500 μM dGTP was included in the sample before equilibration. The amount of bound nucleotide was found by subtracting the amount of free nucleotide from the total nucleotide. Data were plotted as a saturation binding curve and analyzed using non-linear regression and a onesite specific binding model in the program Prism (GraphPad).
795
Standard active and inactive conditions contained either 3 mM ATP or 175 µM dATP as allosteric effectors, respectively. Concentrations of nucleotides used in titration experiments are given in the Fig. 8 legend. Linear fitting of the initial rates was performed in the Cary WinUV Kinetics program (Varian/Agilent) and data were plotted in MATLAB (Mathworks).
940
(A) Substrate specificity is regulated allosterically via the binding of deoxynucleotides. Under low concentrations of dATP, CDP and UDP reduction is favored. TTP in turn leads to GDP reduction and dGTP leads to ADP reduction. When the concentration of dATP increases, it can bind to a second allosteric site, the activity site, which inhibits enzymatic activity. ATP can compete for binding at this activity site to restore activity. (B) Enzymatic turnover in class Ia RNR involves a series of redox-active cysteine pairs. The 945 reducing equivalents for nucleotide reduction are initially provided by a pair of cysteines (C225/462) in the active site that is oxidized to form a disulfide concomitant with product formation. This disulfide is reduced by a second pair of redox-active cysteines (C754/759) found at the C-terminus of the 2 subunit. Ultimately, the disulfide between C754 and C759 is reduced via the thioredoxin/thioredoxin reductase pair together with NADPH, thus allowing for additional rounds of turnover. | 11,970 | 2021-07-31T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Phylogenetic Position of North Sulawesi Tarsius sp. Based on Partial Cytochrome b Gene Sequences
Cyt b gene of North Sulawesi Tarsius sp., T. tumpara, T. sangirensis and T. tarsier (T. spectrum) had been partially sequenced. The homologous sequence of three groups had been compared to describe the phylogenetic position among them, as well as several other species accessed from the Genbank. Total DNA extracted from the muscular tissue had been obtained through tail cut sampling using the innuPREP DNA micro kit, and amplified using a pair of universal primer, L14841 and H15149. The size of the cyt b gene sequence amplified was 307 bp long. Sequence aligned using CLUSTAL-X program and diversity analysis were done using version 5.2.2. MEGA5 program. Genetic distance had been calculated by Tamura 3 parameter method and phylogenetic tree had been built using Neighbor-Joining and Maximum Likelihood methods. Genetic distance based on cyt b gene nucleotide was found from 0 to 0.240 with an average of 0.080. The phylogenetic tree constructed by Neighbor Joining and Maximum Likelihood methods indicated that T. tarsier, T. sangirensis and T. tumpara were closely related with Tarsius tarsier-complex, and distantly related with Cephalopachus bancanus and Carlito syrichta. The genetic distance and the phylogenetic tree had been constructed on the base of partial cyt b gene sequence of T. tarsier, T. sangirensis, T. tumpara and 5 other tarsier species and their accession. Those results are consistent with taxonomy based on morphology and vocal acoustic form.
Introduction
The main island of Sulawesi and its surrounding islands located at Wallacea zone, possess abundant biodiversity.
In the past, Sulawesi Island did not join with any other land [1].Isolated condition in long period of time seemed to push evolution of many species so that Sulawesi Island had a high endemism level [2].Therefore, the main priority in the management of natural resources in Sulawesi is conserving a resource of genetic diversity, because species conserved in the Sulawesi are not found in other places [3].Sulawesi has become "top hotspots" for biodiversity conservation [4].Sulawesi has about 529 endemic vertebrates or approximately 1.9% of endemic vertebrates in the world [2], including 7 species of endemic monkeys [5] and 7 -9 Tarsius species of Tarsius tarsier-complex [6].Recently, they are threatened to distinct due to natural predators, illegal hunting, and habitat destructions by human intervention.
Tarsius taxonomy up to now remains a problem continuously debated.Originally, tarsier was a monotypic genus of Tarsiidae family [7].Based on morphological characteristics, tarsier consists of two different groups, western tarsier and eastern tarsier [8]; but based on genetic analysis and vocal acoustic form, tarsier is divided into three groups, western tarsier living in great Sunda Islands (Borneo and Sumatera), eastern tarsier living in Sulawesi and Phillippine tarsiers living in southern Phillippine [6].Nowadays, the genera of Tarsiidae family had been revised to be three separate genera, Tarsius, Cephalopachus, and Carlito, each of which is allopatrically distributed in three different biogeographic regions, Sulawesi, great Sunda Islands (Borneo and Sumatera) and Mindanao [6].
Sulawesi tarsier grouped in one group called Tarsius tarsier-complex, comprises of 9 species: T. sangirensis, T. tumpara, T. wallacei, T. lariang, T. pumilus, T. fuscus, T. tarsier (T.spectrum), T. pelengensis and T. dentatus [6] [9].Tarsius tarsier complex is a vague taxonomic group due to consisting of closely related species, so that it is less distinct to recognize the interspecific diversity based on the morphological variations.The taxonomic status of Sulawesi tarsier, especially of low land tarsier, has been long disputed due to interspecific similarity.The low land tarsier species has no significant difference in body size or body proportion [9].
Study on genetic diversity and population biology had been carried out using mitochondrial DNA [10] [11].Variation patterns of mtDNA can be used as species distinction as well as species investigation of those endangered species [12].The mitochondrial DNA is often used as genetic marker for closely related interspecific and intraspecific genetic diversity study, since it evolves faster than nucleic genes so that it could give more variations to reconstruct the evolutionary history [13].Mitochondrial DNA genes had been widely applied to estimate the phylogenetic relationship of primate [14]- [16].These genes had also been used to restudy the phylogenetic relationship among closely related taxa [12].The mitochondrial cyt b gene is known as a gene evolving faster, so it can have more variation and can be used for phylogenetic and biogeographic study [16]- [18].
Whole mitochondrial genome studies had been conducted on T. syrichta [19], T. bancanus [20], T. wallacei [21], T. lariang [22] and T. dentatus [23], so that these results can be used as references to study several genetic markers of encoding or noncoding genes in the mitochondria.The cyt b gene contain discrete character groups (base position at codon) representing the mutation rate, so that it can be used as a phylogenetic marker [24].
Partial sequencing of the cytochrome b gene had been used to uncover the phylogenetic position and the genetic relationship among several tarsier species and among other primates [25] [26].The cyt b gene is one of the protein coding gene of the mtDNA.Its product is cytochrome b (cyt b) apoenzym identified as the central catalytic subunit of Q cycle.The cyt b gene of tarsier has a length of 1140 bp, known as coding region of the protein located at 14169 to 15308 mtDNA sequence, flanked by the tRNA-Glu gene and the tRNA-Thr gene [20].
Several universal olygonucleotide primers had been developed for amplificating and sequencing cyt b gene of different animals [27].Partial sequence of the cyt b gene had been applied to uncover the genetic diversity among T. sangirensis, T. tumpara and T. tarsier (T.spectrum or Manado tarsier) and several other Tarsius sp.obtained from the GenBank.
Sample Collection and Treatment
Sample specimen used was muscular tissue obtained by tail cut sampling.It was then stored at -20˚C, in alcohol of 95%.Because of difficulty of collecting samples, each species consisted only 2 sample specimens.The sampling site and the specimen treatment is shown in Table 1.
The data sequence of C. bancanus, C. syrichta, T. wallacei, T. dentatus, T. dentatus × lariang and T. lariang had been obtained from the GenBank.Tarsier species and their accession numbers taken from GenBank are shown in Table 2.The primer used was universal primer L14841 and H15149 [27].The specific location of the primers, L14841 (33 bp) and H15149 (34 bp) in the cyt b gene region was from the 63st nucleotide sequence downstream related to the forward primer and from the 435st nucleotide sequence upstream related to the reverse primer.
DNA Extraction and Cyt b Gene Amplification
Total DNA had been extracted using InnuPREP DNA micro Kit.Purity measurement of total DNA isolated of the sample resulted in a concentration of 46 -187 µg per gram sample with λ260/λ280 ratio.
The PCR component and condition were optimized so that the cyt b gene could be amplified (Table 3 and Table 4).
Cyt b Gene Sequencing
The amplification product had been sent to First BASE, Laboratories Sdn.Bhd.Selangor, Malaysia to be sequenced.The equipment used was ABI PRISM 3730xl Genetic Analyzer.Biosystem USA.
Sequence Alignment and Data Analysis
The cyt b gene sequence data of the North Sulawesi Tarsius sp. and those accessed from the GenBank had been aligned using CLUSTAL-X programme [28].Bayesian Information Criterion (BIC) had been used to consider the best substitution pattern.The genetic distance had been analyzed using the TN 93 + G (Tamura-Nei) and T92 + I (Tamura 3-parameter) methods.The phylogenetic tree had been constructed based on Tarsius sp.cyt b gene using two different approaches, Neighbor-Joining (NJ) [29] and Maximum Likelihood (ML) method [30] where C. bancanus and C. syrichta were treated as outgroup.Note: Before sequencing, the PCR product was purified and electrophorized in agarosa gel 1.5%.
Extraction and Amplification of cyt b Gene
Total DNA of six samples of North Sulawesi Tarsius sp. had been isolated and amplified.The results of electrophoresis on 1.5% agarose gel showed that cyt b gene amplified was at about 400 bp (Figure 1).
Sequence Characteristic
Multiple alignment of cyt b gene sequence of 307 bp long derived from T. sangirensis, T. tarsier, T. tumpara and those from homologous cyt b gene sequence of several tarsier species taken from GenBank indicates that invariable sites character as much as 72.97%, informative parsymony sites as much as 27.03% and variable sites as much as 27.03% (Table 5).
Nucleotide composition of the partial gene cyt b sequence of each tarsier species exhibits variations indicated by frequency difference of each bases among species.Average base frequency of T = 32.80%,C = 22.67%,A = 28.63%dan G = 15.90%.Analysis of several parameters of partial cyt b gene sequence (Table 6) found that nucleotide diversity (Pi) = 0.0698, total mutation = 97, ts/tv ratio (R) = 4.978 and ts/tv (k) ratio between purine bases = 5.253 and between pyrimidine bases = 13.418,respectively.
Genetic Distance
Genetic distance measured using the Tamura 3 parameters indicate that the value varies from 0 to 0.240 (complete matrix data are not included).The genetic distance of 0 is shown by sample pairing of the same species.based on distance of Neighbor-Joining (NJ) and Maximum Likelihood (ML).Both NJ and ML phylogenetic trees constructed based on the nucleotide sequences showed same tree topologies.Both NJ and ML phylogenetic tree put North Sulawesi tarsier, T. sangirensis, T. tumpara, and T. tarsier relatively at the same position at the tree.
Phylogenetic Tree
In NJ phylogenetic tree, both T. sangirensis and T. tumpara form a monophyletic clade and separate from the others Tarsius tarsier-complex (99% BR).On the other hand, T. tarsier forms a monophyletic clade and occupies position in Tarsius tarsier-complex clade (94% BR).In ML phylogenetic tree both T. sangirensis, T. tumpara also form a clade separated from the other T. tarsier-complex (98% BR).T. tarsier is located on Tarsius tarsiercomplex clade.In general, both the NJ and ML trees showed similarities in grouping populations, in which T. sangirensis, T. tumpara, T. tarsier and other species are clustered in a larger clade called Tarsier tarsier-complex.
Discussion
Prior classification of tarsier consisted of C. bancanus (T.bancanus), C. syrichta (T.syrichta) and T. tarsier (T.spectrum) [7].These three species are closely related [31].Recently, genera and species of the Tarsiidae family originally known as a monotypic genus, right now this family consists of three genera [9].These three genera are Chepalopachus, inhabiting the biogeographic region of Borneo and Sumatra, Carlito known as the Philippine tarsier and Tarsius known as east tarsier or Sulawesi tarsier or Tarsius tarsier-complex.Striking morphological characteristic differences of all three genera are mainly in the form of teeth, long legs and arms, tail end hair (tail-tuft) and mammary gland.Most Tarsius forms social group and has a duet song; Carlito does not form social groups in the wild but could socialize in captivity, as well as has no duet song, and Chepalophacus does not form social groups in the wild, as well as can not socialize even in captivity and has no duet song [9].Some species of Tarsius tarsier complex had separated from the other Sulawesi tarsier, because these species inhabit separate biogeographic region of Sulawesi mainland (See Figure 4).Both species of T. sangirensis and T. tumpara inhabit the archipelago biogeographic region known as the Sangihe archipelago.T. tumpara inhabits Siau island but T. sangirensis inhabits Sangihe island.Two islands are located at a distance of ± 60 km and restricted by the deep sea.T. sangirensis and T. tumpara are not allied with the Philippine tarsier C. syrichta, although their biogeographic regions are close to each other.In other hand C. syrichta is allied with C. bancanus, although their biogeographic regions are relatevely far apart.T. tumpara is subtly different from T. sangirensis, but both are significantly different from other tarsier of the Tarsius tarsier complex.Morphological features related to rare tail hair of T. sangirensis and T. tumpara resemble the features of Philippine tarsier C. syrichta.
Related to the phylogenetic posisition of T. sangirensis, T. tumpara and T. tarsier this research result supports the hypothesis that T. sangirensis, T. tumpara are the sister takson and allied with Tarsius tarsier-complex [32].Grouping T. sangirensis, T. tumpara together and grouping of T. tarsier to the large clusters of T. wallacei, T. dentatus and T. lariang are in accordance with distribution of Sulawesi tarsiers.
Related to the distribution of Sulawesi tarsiers, there were some distribution forms like Manado form, Libou form, Sejoli form, Tinombo form, Kamamora form, and Togian form [2]. The distribution of Sulawesi tarsiers had occured in part by events of Pleistocene vikarian and partly had been influenced by tectonic activities C. syrichta (T.syrichta) (NC012774) occured before the Pleistocene era [3].
Distribution of Sulawesi tarsier is closely related to that of vocalization forms (duet song).Distribution of vocalization forms of tarsiers of the northern and central parts of Sulawesi are in accordance with each species locality [33].There was a presupposition saying that the vocal forms of tarsiers were different among the tarsier population in the south and southeast Sulawesi, as well as in offshore islands of Selayar, Buton and Kabane region [34].The striking difference is related to their acoustic features; tarsier populations of south, southeast, and offshore islands of Sulawesi are classified as different species.Tarsier species having different acoustic duet song features are Tarsius dianae (T.dentatus), T. lariang, Togian tarsier, T. pelengensis.
This research result reinforces the fact that T. tarsier, T. sangirensis and T. tumpara are three distinctive species genetically, bioacousticly, morphologically as well as they have different distribution.This explanation is consistent with the hypothesis that speciation of tarsiers had occured as a result of the spread of proto-Sulawesi island, followed by various subsequent fragmentations [23].The phylogenetic positions of the North Sulawesi tarsiers uncovered by this research are based only on 307 nt sequence of the cyt b gene, but these results are in accordance with the several publications reported [9] [33] [34].To obtain more reliable results, the results of this study need to be examined again by cyt b gene sequence analysis as a whole, or to be examined using other applications of genetic markers.
Conclusions
Based on the cyt b gene partial sequence of North Sulawesi Tarsius sp., this result of research uncovers that T. tarsier, T. sangirensis and T. tumpara are closely related to Tarsius tarsier-complex, and relatively open related to C. bancanus and C. syrichta.This was supported by the genetic distance of each species uncouvered at NJ phylogenetic tree as well as at ML phylogenetic tree.
The positions of North Sulawesi tarsiers at the phylogenetic trees constructed based on the partial sequence of cyt b gene are in accordance with the classification based on morphology, distribution according to biogeographic region, as well as on distribution according to vocalization forms.
Genetic distance is shown by pairing of C. bancanus and other species, i.d. from 0.181 to 0.240, as well as pairing of C. syrichta and other tarsier species, with values from 0.181 to 0.200.Pairing among Sulawesi Tarsius sp. have the values 0 to 0.095.Overall mean distance is 0.080.These data indicate that those Sulawesi tarsiers are classified as closely related taxa and relatively distant related to the Borneo tarsier C. bancanus as well as to the Philippine tarsier C. syrichta.
Figure 2 and
Figure 2 and Figure 3 are phylogenetic trees based on nucleotides of partial cyt b gene constructed by method
Figure 2 .
Figure 2. Phylogenetic tree based on nucleotide sequences had been constructed by Neighbor-Joining method.NJ-1000 BR, substitution models: Tamura-Nei model, (TN93 + G).Numbers at the branches are bootstrap values.Note: species marked with an asterisk are species sampled.
Figure 3 .
Figure 3. Phylogenetic tree based on nucleotide sequence had been reconstructed by Maximum Likelihood (ML), Bootstrap 1000.Substitution Model: Tamura 3-parameter model (T92 + I).Branch Swap Filter: Very Strong.Numbers at the branches are bootstrap values.Note: species marked with an asterisk are species sampled.
Figure 4 .
Figure 4. Distribution Map of Tarsier Found in Sulawesi.This figure modified from Groves and Shekelle, 2010.
Table 1 .
Sampling and specimen treatment.
Table 2 .
Tarsier species and accession numbers were taken from GenBank.
Table 3 .
Optimization component of PCR.
Table 5 .
Summary of cyt b gene sequence diversity.
Table 6 .
Analysis results of cyt b gene partial sequencing.
Note: Analysis of base frequency only involved 306 bp, adjusted to the reading frame encoding amino acids, started from second nucleotide of each sequence amplified. | 3,757.6 | 2014-07-18T00:00:00.000 | [
"Biology"
] |
Optimization Design of Drilling Fluid Chemical Formula Based on Artificial Intelligence
Through the research and development of the regression prediction function of support vector machine, this paper applies it to the prediction of drilling fluid performance parameters and the formulation design of drilling fluid. The research in this paper can reduce the experimental workload and improve the efficiency of drilling fluid formulation design. The apparent viscosity (AV), plastic viscosity (PV), API filter loss (FLAPI), and roll recovery (R) of the drilling fluid were selected as the inspection objects of the drilling fluid performance parameters, and the support vector machine was used to establish a model for predicting the drilling fluid performance parameters. This predictive model was used as part of the overall drilling fluid formulation optimization design model. For a given drilling fluid performance parameter requirement, this model can be applied to reverse the addition of various treatment agents, and finally, the prediction accuracy of the model is verified by experiments.
Introduction
As a main tool, computer is introduced into the design and management of drilling fluid engineering. By combining computer technology with the thinking of drilling fluid experts, the level of drilling fluid design can be raised to a new level, and the design speed and quality will be greatly improved [1][2][3][4][5]. e development of the drilling fluid optimization design system can not only solve the problems in the traditional drilling fluid design but also has a more prominent feature that the computer system can store the design data for secondary use so that the experience accumulated in the previous design can be absorbed and the mistakes made in the past can be avoided in the new drilling fluid design process [6][7][8][9]. At the same time, the system can also output a unified design document. e research on the drilling fluid optimization design system, the establishment of a high-level drilling fluid database, and the development of efficient drilling fluid optimization design methods will contribute to the learning and promotion of successful drilling fluid design experience, the realization of integrated management of formulas, the improvement of information utilization, the integration of modern computer technology and drilling fluid design, and the realization of automation, standardization, and intelligence of drilling fluid design. e research of drilling fluid optimization design system can collect and popularize the successful drilling fluid design case experience summarized by the previous drilling, guide the new technicians to conduct drilling fluid design, and continuously promote the improvement of drilling fluid design technology.
Based on the research of case-based reasoning technology, rule-based reasoning technology, and support vector machine regression prediction technology, this paper also realizes their fusion reasoning. It not only avoids the disadvantages of each reasoning model operating in isolation and cannot fully apply the relevant conclusions in each other's reasoning to improve the reasoning success rate but also realizes the complementary advantages of each reasoning model and improves the design success rate of the system.
The Concept of Support Vector Machine
compromise between model complexity (learning accuracy for a specific training sample) and learning ability (ability to identify random samples without error) based on limited sample information to expect the best generalization ability. e most significant difference between it and the neural network is that it only needs to build a support vector machine model based on limited training samples by mining the corresponding relationship between the input and output data, to realize the prediction of unknown data. Support vector machines not only perform well in processing language, text, face recognition, etc., but also achieve good results in regression, such as using logging data to predict formation porosity and reservoir properties in the field of well logging [10][11][12][13]. Support vector machine is influencing various areas of machine learning through this new method of intelligent machine learning. Support vector machines originated from solving classification problems. e support vector machine introduces an insensitive loss function to solve the regression estimation problem of linear and nonlinear systems, which also achieves the same effect as the classification problem. Based on the principle of the support vector machine, this section will gradually explain the regression prediction principle of the support vector machine in detail.
Basic eory.
e basic idea of statistical learning theory is to estimate limited or small-scale sample data, mainly to study the relationship between experience minimization and empirical risk, expected risk, and how to seek new learning methods and principles based on existing ones. Statistical learning theory has apparent advantages in studying the learning laws of limited samples. It also effectively avoids the shortcomings of traditional statistical theory that quickly make the model fall into the local minimum due to overfitting and too many dimensions. Its progressive nature makes statistical theory develop rapidly under the efforts of many researchers [14,15].
An essential concept in statistical theory, the Vapnik-Chervonenkis dimension (VC dimension), can measure the generalization ability of the model trained by the support vector machine [16][17][18][19]. Under limited training samples, the larger the VC dimension of the learning machine, the more complex the learning machine will be, and the larger the confidence interval will be, which will eventually lead to a larger gap between the actual risk and the empirical risk, which means the model is more generalizable.
If there is a sample set with n data samples, which can be separated by a function set in all possible 2 n ways, then the function set is said to be able to break up the sample set with n samples. erefore, the VC dimension of the indicator function set is the maximum number of sample sets that can be broken up. In short, if there are n samples of sample sets that this function set can separate, and this function set cannot separate n + 1 samples of sample sets, the dimension of the function set is n. In particular, if a corresponding function set can be found to separate the sample set of any number of samples, then the VC dimension of this function set is infinite. e VC dimension of the general function set can be defined based on the indicator function VC dimension. e basic principle is to define a threshold to convert a real-valued function into a binary indicator function.
Besides VC dimension theory, structural risk minimization is the second factor that has a great impact on machine learning. To achieve better generalization ability in machine learning, the traditional theory reduces the empirical risk to make it reach the minimum value. Based on statistical learning theory, it is found that the quality of generalization ability is also related to the VC dimension, which is used to narrow the confidence range. Since there have been many shortcomings in the past in relying on empirical risk to evaluate the generalization ability of learning machines, Vapnik et al. proposed the method of applying structural risk minimization to solve this problem when they studied support vector machines. e basic idea of structural risk minimization theory is to arrange the function set into a sequence of subsets in order of VC dimension size and then minimize the actual risk by calculating each subset's empirical risk and confidence range.
One of the ideas to achieve structural risk minimization is to design a particular structure of the function set so that each subset can achieve the minor empirical risk (such as making the training error 0) and then select the appropriate subset to minimize the confidence range. e function that minimizes the empirical risk in this subset is the optimal function. e support vector machine method is a concrete realization of this idea.
Classification.
In the period of popular application of neural network systems, some scholars began to study the machine learning method with limited samples and first proposed the theory of statistical knowledge [20]. With the continuous progress in machine learning, new approaches are emerging. At the same time, it has been found that neural networks also have some drawbacks in dealing with practical problems, such as overlearning, underfitting, the curse of dimensionality, and falling into a local minimum. It is also not suitable for small samples of drilling fluid experimental data. With the continuous efforts of researchers, support vector machine theory has been paid more and more attention and developed rapidly with its unique processing methods for limited sample problems, nonlinear problems, and high-dimensional recognition problems.
In the early days of the emergence of support vector machines, it was considered that only two classification problems could be handled. Its basic idea was to find an optimal classification hyperplane to divide the data samples. Later, as classification requirements increased, support vector machines were developed to handle multiclassification problems [21][22][23]. e classification problem theory will be introduced in the following.
Suppose there are linearly separable samples, as shown in the formula given below: Since the sample is linearly separable, it can be expressed as y � +a or y � −a. If x i belongs to the first category, y � +a; otherwise, y � −a.
e basic idea of the support vector machine classification machine is to introduce a classification plane to separate the two samples as accurately as possible. If the classification plane found can completely separate the two types of samples and produce the most significant classification distance, then this plane is called the optimal separating hyperplane. e optimal separating hyperplane is expressed as follows: Since these two types of samples are linearly separable, they satisfy the relationship of formulas (3)- (4): In the formula, ω · x i are the inner product of two vectors. If the values of ω and b are appropriately adjusted, then the support vector that satisfies the formula (3) and is the closest point to the hyperplane (the point that falls on the two dashed lines) can be calculated.
According to the definition of the optimal separating hyperplane, its decision function is obtained as shown in the following formula: Convert the optimal hyperplane into a quadratic programming problem solution, as shown in formula given below: e method described above is where the data samples are linearly separable. Still, if the vector distribution is linearly inseparable, then slack variables must be introduced to solve this problem [24][25][26]. e specific method is to take a positive number for the introduced slack variable, select a nonlinear mapping function ϕ(x), and convert the original problem from a two-dimensional to a high-dimensional space for processing so that the nonlinear samples can be linearly divided in the high-dimensional space.
To avoid the cumbersome inner product calculation in high-dimensional space, the concept of kernel function can be introduced to replace the internal product operation so that the calculation amount is no longer proportional to the space dimension, which significantly improves the calculation efficiency. is paper uses the radial basis function as the kernel function, so the nonlinear optimization classification method becomes Its corresponding dual form is as follows: From the KKT (Karush-Kuhn-Tucker) condition, we can get the formula given below: From formulas (7) and (8), algebraic formula for b can be obtained, which is given below: By bringing formula (8) into the support vector, b can be obtained, and finally, the classification function is obtained, which is given below:
Regression Prediction.
With the continuous expansion of the application scope of support vector machines in classification problems, people began to explore their application methods for regression prediction of problems [27,28]. In this section, the regression principle of the support vector machine will be described in detail.
In the support vector machine processing regression prediction problem, the value of the output result may cover the entire real number domain and is no longer as single as the classification problem. e most intuitive description of the regression prediction problem is that the support vector machine establishes the correspondence between the input data X and the output result Y through the given training samples and then uses this correspondence to predict the unknown data. At the same time, the model can be trained repeatedly so that the support vector machine has the selflearning ability.
During the training and learning process, the SVM finds a specific function, which enables it to find the correspondence between any input and the corresponding output data.
e loss function is defined in the support vector machine regression machine. In statistics, the loss function is a function to measure the loss and the degree of error. e more common applications are the Huber loss function, the quadratic loss function, and the insensitive loss function. Compared with other loss functions, the insensitive loss function has fewer support vectors, reducing the calculation amount, and is the most widely used.
Computational Intelligence and Neuroscience
Suppose there is a set of data sample sets (x 1 , y 1 ), (x 2 , y 2 ), (x 3 , y 3 ), . . . , (x n 2 y n ) , x i ∈ R n , y i ∈ R n , then the insensitive loss function selected in this paper is expressed as formula given below: Given a set of training samples (x k , y k ), k � 1, 2, 3 . . . n, the regression problem is establishing a function correspondence between x and y through the given training samples, y � f (x), which satisfies the minimum insensitive loss function. When the difference of y � f (x i ) between y i is less than the defined insensitive loss function ε, the error is not included in the loss function. e principles of linear and nonlinear regression will be introduced separately below.
Linear Regression Model of Support Vector Machine.
In linear regression [29][30][31], the insensitive loss function of a certain precision is defined to satisfy ε ≥ 0 and relaxation factors ξ k ≥ 0ξ * k ≥ 0 and parameter C are introduced (penalty factor C meets C ≥ 0, indicating the degree of penalty for samples exceeding ε). e problem of the optimal hyperplane that is difficult to solve is transformed into an easy-toimplement quadratic programming problem. e objective function is as follows: e first term in the formula makes the function smoother and improves the model's generalization ability, and the second term reduces the model error. e introduction of the penalty factor c balances these two terms. After introducing the Lagrange multipliers, α,α * and Lagrange functions, (12) becomes: Solving the above Lagrange problem, the dual problem is obtained as follows: Solving the above dual problem, the optimal regression decision function can be obtained as follows:
Nonlinear Regression Model of Support Vector
Machine. e method of solving the nonlinear regression problem of the support vector machine is similar to the method of dealing with the nonlinear classification problem. By mapping the original nonlinear fitting data to a highdimensional space for calculation, for the training sample (x k , y k ), k � 1, 2, . . . , n, the nonlinear regression problem is transformed into the following model: is constrained optimization problem is solved using the Lagrange multiplier method, and a kernel function is introduced, which is defined as follows: Introducing this function to the solution of the dual problem, the SVM regression estimation function can be written as follows:
Support Vector Machine Kernel Function Selection and Parameter Optimization
3.1. Kernel Function Selection. Support vector machine is a machine learning method based on limited samples, and its generalization ability is highly related to the selected kernel 4 Computational Intelligence and Neuroscience function, kernel parameter, and penalty factor C. e kernel function realizes the nonlinear mapping of the sample data from the input space to the feature high-dimensional space. However, it is still impossible to establish a direct relationship between the parameters and the generalization ability of the learning machine. erefore, choosing the kernel function and parameters is a complex problem in the application field of support vector machines.
If a function can satisfy the Mercer condition, it can be used as a kernel function [25,29]. Currently, many scholars are devoted to the research of kernel function construction. Still, so far, there is no general method to determine the kernel function, so linear kernel (LK), polynomial kernel (PK), radical basis function (RBF), and sigmoid kernel (SK) are still generally selected in practical applications. As the representative of the global kernel function, the polynomial kernel is characterized by allowing the sample points far away from the fitting function curve to influence the kernel function's value significantly. e representative of the local kernel function is the radial basis function, characterized in that the samples with farther distances have less influence on the value of the kernel function.
Using the support vector machine of the drilling fluid optimization design system to predict the performance parameters of the drilling fluid, different kernel functions are used to predict the 15 groups of drilling fluid API fluid loss with other formulations. e results are shown in Figures 1-3. e support vector machine uses the Squared correlation coefficient to measure the model's prediction accuracy. e radial basis kernel function has achieved a high data prediction accuracy, as shown in Table 1. It is found that if there is no prior understanding of the regularity of the sample data, it is more reasonable to choose the radial basis function as the kernel function of the support vector machine.
Kernel Parameter Optimization Method.
Although the choice of the kernel function will lead to different prediction performances of the support vector machine, it is found that the selection of the kernel parameter has a more noticeable impact on the results in the practical application of the regression prediction of the support vector machine. In many cases, it plays a crucial role in the performance of the learning machine [28,30]. Many scholars have used random search algorithms to determine nuclear parameters. e generally recognized algorithms include the particle cluster algorithm, genetic algorithm, and ant colony optimization. Although these random search algorithms that have been developed can accurately calculate the optimal kernel parameters of support vector machines, there are some problems in application. For example, the parameter optimization process of the genetic algorithm needs to go through generations of evolutionary calculus to determine the optimal parameters, so these methods still require a high amount of training for the support vector machine. e grid search is one of the most direct kernel parameter optimization methods. Its fundamental theory is to divide the parameters to be searched into several grids within a specific range and find the optimal parameters by traversing all the points in the grid. is method can find the optimal global solution when the optimization interval is large enough, and the step size is small enough. At the same time, the grid search method is easy to implement and easy to use. erefore, this paper selects the radial basis function as the kernel function of the support vector machine and uses the grid search to determine the kernel parameters. e specific process is given below.
For the penalty factor C and kernel function parameter g that need to be determined, all possible values of C and g are used as the range of grid search, and the grid of values of C and g is discretized. en, with fixed step size, the grid is Computational Intelligence and Neuroscience generated along the different growth directions of the two parameters C and g, which are represented by nodes in the grid. First, choose a rough search in an extensive range, and then finely search around the optimal value. Using the crossvalidation method, the training data is divided into n subsets of the same size, and the n − 1 subsets are used as training samples to obtain a decision function, which is used to predict the subset that has not participated in the training.
is cycle is repeated n times until all subsets are predicted as test samples. Take the average accuracy obtained from n predictions as the final accuracy, as shown in Figure 4. Studies have shown that exponentially growing grids are a reasonable and efficient search method.
Case Study
According to the above analysis of the support vector machine, since the influence of the drilling fluid treatment agent on the performance of drilling fluid is multifaceted, the performance data of the three treatment agents added to the drilling fluid were measured in the laboratory. Using the data based on support vector machine, a calculation model of a multifactor nonlinear problem is established based on the requirements of drilling fluid performance. Using this model, the drilling fluid formula that meets the requirements can be quickly calculated.
In this paper, the radial basis function is selected as the kernel function, vb.net is used to design the program, and the grid search algorithm is used to realize the optimization of model parameters, to establish a model for predicting the dosage of drilling fluid treatment agent based on support vector machine.
Taking the commonly used strong inhibitory water-based drilling fluid in an oilfield as an example, the formula is 4% bentonite + 0.2% Na 2 CO 3 + 1%KOH + 2%SMP-2 + 2% SPNH + coating agent + fluid loss agent + 0.3% CaO + inhibitor + 0.5%CMC-LV + 5%PHT + 1%liquid lubricant + barite. ree key treatment agents were selected as the investigation objects, namely inhibitors KCI, fluid loss reducers JT888, and coating agents IND10. e added amount of each treatment was used as the input, and a support vector machine model with AV, PV, FL API , and R as the output was established, respectively. Its structure is shown in Figure 5.
rough experiments, AV, PV, FL API , and R of drilling fluids of 50 groups of the above 3 treatment agents were measured in different dosages and combinations. Forty groups of data were randomly selected as SVM model training samples, and the remaining 10 groups of data were used as model test samples. Experimental data are listed in Table 2.
Computational Intelligence and Neuroscience
Use the remaining 10 groups of experimental data to check the predictive ability of the model, and the mean squared error (MSE) is commonly used in the support vector machine to measure the predictive accuracy of the training gained model, and the MSE calculation formula is (19). e smaller the value of MSE, the better the accuracy of the prediction model in describing the experimental data. Table 3 compares the prediction results of the model with the experimental results. From Table 3, it can be seen that the model established by the support vector machine to predict the performance parameters of the drilling fluid has high prediction accuracy and can meet the requirements of drilling fluid design. It can be used to build the subsequent drilling fluid formulation optimization design model.
On the basis of obtaining the SVM prediction model of drilling fluid performance parameters, this prediction model is used as a part of the model for inversion of the treatment agent dosage in the entire drilling fluid formula, and the drilling fluid performance required in different situations is used as the target parameter. e dosages of KCI, JT888, and IND10 are calculated by inputting the control variables into the prediction model. If the error between the output results of the prediction model and the target parameters is within the allowable range, it is considered that the dosages of the three treatment agents at this time can meet the performance requirements of the drilling fluid and output the result of adding this group. e computational structure model is shown in Figure 6.
A calculation example is as follows.
Drilling Fluid Formulation Design.
Under the drilling fluid formulation optimization design model, the AV, PV, FL API , and R of the drilling fluid (40 mPa * s, 37.0 mPa * s, 4.2 mL, and 85.0%, respectively) are treated as the target performance parameters of this drilling fluid. e commonly used dosages of KCl, JT888, and IND10 are 0-20.0%, 0-2.0%, and 0-2.0%, respectively, which are the trial calculation ranges, and this model is used for calculation. If the errors of the calculated AV, PV, FL API R, and the target performance parameters are within 5%, 5%, 3%, and 5%, respectively, the requirements of the target performance parameters are met. At the same time, the amount of treatment agent reversed by the model is output. Under the given calculation step, the model calculates a total of 9238 sets of data. Excluding some formulas with excessive addition, the formulas that meet the error range are shown in Table 4.
Experimental Verification Model.
Although the support vector machine has good generalization ability, it can be seen from the error data of the previous prediction model establishment and test that its prediction accuracy also has a certain deviation, so the experimental verification is carried out on the reversely deduced treatment agent dosage formula. e experimental results are shown in Table 5.
It can be seen from the above chart that under the SVM model, a target drilling fluid performance may obtain a variety of drilling fluid formulations that meet the requirements, of which groups 1 and 3 are the preferred formulations, and their SVM calculation results are similar to the experimental results. However, there may also be unqualified treatment agent dosages. As can be seen in Figure 6, the AV and PV of groups 5 of treatment agents have a large gap with the target parameters after experimental verification, and they are unsatisfactory formulas.
Optimal result Computational Intelligence and Neuroscience 9
Conclusions
In order to improve the quality of drilling fluid design, using computer to assist the design and introducing artificial intelligence system into the design is a common method to solve these shortcomings in the traditional drilling fluid design. At the same time, with the rapid development of oil and gas exploration and development technology and the increasing demand, modern drilling technology has put forward newer and higher requirements for drilling fluid, and various new drilling fluid technologies have been applied and developed. Today, in pursuit of high efficiency and low cost, intelligent drilling fluid design and management technology has also received more attention. erefore, it is necessary to develop more practical software for modern drilling fluid design and drilling fluid data management. is paper introduces the basic theory of support vector machine and the principle of regression classification in detail, and analyzes and explains the two difficult problems of support vector machine kernel function selection and kernel parameter determination. Finally, the method of SVM applied to drilling fluid formulation design is studied, and a SVM model for predicting drilling fluid formulation is constructed, and it is verified by experiments that the model has good prediction accuracy.
Data Availability e dataset can be obtained from the corresponsing author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 6,244 | 2022-10-04T00:00:00.000 | [
"Engineering",
"Computer Science",
"Chemistry"
] |
Availability Improvement of Layer 2 Seamless Networks Using OpenFlow
The network robustness and reliability are strongly influenced by the implementation of redundancy and its ability of reacting to changes. In situations where packet loss or maximum latency requirements are critical, replication of resources and information may become the optimal technique. To this end, the IEC 62439-3 Parallel Redundancy Protocol (PRP) provides seamless recovery in layer 2 networks by delegating the redundancy management to the end-nodes. In this paper, we present a combination of the Software-Defined Networking (SDN) approach and PRP topologies to establish a higher level of redundancy and thereby, through several active paths provisioned via the OpenFlow protocol, the global reliability is increased, as well as data flows are managed efficiently. Hence, the experiments with multiple failure scenarios, which have been run over the Mininet network emulator, show the improvement in the availability and responsiveness over other traditional technologies based on a single active path.
Introduction
The design of a network requires a robustness study, which is closely related to the use of techniques that minimize service downtime, frame losses, delay, jitter, and, in general, network vulnerabilities that jeopardize the stability of systems. As a rule, network reliability and resilience are improved avoiding single points of failure, and to this end redundancy is one of the widely used methods for preventing the disruption of the normal operation of the infrastructure.
Consequently, the network reliability improvement may enable new applications; especially those critical use cases that require minimum latency and loss of information. In this sense, we highlight the following environments.
(i) Automation systems: (a) Smart Grid. The utility industry and substation automation applications have to accomplish the critical mission of providing power supply in transmission and distribution grids. In accordance with the Guidelines for Smart Grid Cyber Security [1] issued by the National Institute of Standards and Technology (NIST): "although the time latency associated with availability can vary, it is generally considered the most critical security requirement. " (b) Industrial Control. It is present in factory automation, process industry and motion control.
(ii) Transportation: reliable solutions are being implemented in different sectors, such as traffic control systems, vehicular networks, or avionics.
(iii) Audio/video: transmission of events whose streaming requires low latency without frame losses.
(iv) Data center: these infrastructures are inherently redundant, establishing multiple paths between hosts.
(v) Access and transport networks: they include mechanisms of protection to maintain the performance, minimizing service interruptions and fulfilling Service Level Agreements (SLAs).
In particular, we focus on the reliability of Ethernet Local Area Networks (LANs) and present an overview of layer 2 redundancy protocols, showing the active redundancy 2 The Scientific World Journal approach as a necessary strategy to provide zero recovery time. The fact is that, despite new insights have expanded the Ethernet standards to be redundant, there are not many options to provide zero loss performance in LANs; among them we focus on a solution recently standardized by the International Electrotechnical Commission (IEC): the Parallel Redundancy Protocol (PRP) that, based on the duplication of data and resources, enables a seamless communication in single-failure scenarios.
In this study, instead of using PRP in conjunction with common spanning tree technologies, we propose to combine the OpenFlow and PRP protocols for implementing a further active redundancy, achieving zero recovery time in case of multiple simultaneous failures; for which we rely on the capabilities of PRP nodes, along with flow-oriented control and flexibility features of OpenFlow. Moreover, taking into account that resilience is closely related to the network dynamicity, we describe the potential of the Software-Defined Networking (SDN) paradigm, in which "the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network is abstracted from the applications" [2], to achieve a better utilization of available resources in situations of active redundancy for facilitating its management and increasing responsiveness, also considering the challenges of a centralized approach.
In summary, the aim of this proposal is to ensure a high availability while improving the efficiency and effectiveness of PRP networks, which may serve as an enabling technology for the development of, for example, emerging industrial wireless networks based on PRP solutions [3].
The rest of this paper is organized as follows. Section 2 analyzes the relationship between redundancy and availability, and some traditional layer 2 redundancy protocols; Section 3 provides an overview of the IEC 62439-3 specification, outlining the capabilities of PRP; Section 4 describes how the SDN paradigm may be incorporated in redundant networks and presents our proposal, whose results are shown in Section 5; Section 6 contains related and future work. Finally, in Section 7 we present the conclusions.
Redundancy and Availability
In this section, we outline different redundancy methods in connection with the availability of resources. Subsequently, several layer 2 redundancy techniques are summarized.
Types of Redundancy.
Redundancy takes two forms: temporal and spatial, while the first form replicates the information over time in a distributed manner, in the spatial redundancy the components or data in a network are replicated, which is the object of study. Conventionally, two types of redundancy are distinguished.
(i) Standby redundancy: through passive resources, these redundant networks switch from an active to a secondary network connection. We can distinguish between partial, which only overcomes the failed link/node, and global recovery, where the whole path is reconfigured (Figure 1(a) exemplifies these principles). In any case, two different schemes can be considered: (a) Protection is schemes where the standby paths are precomputed in a proactive way. (b) Restoration is mechanisms that define recovery network elements reactively in the face of failures and changes in the network.
Both approaches result in a certain communication downtime, but the protection typically incurs in a lower recovery delay than the restoration approach.
(ii) Active or parallel redundancy: multiple copies of the same data are transmitted along multiple paths simultaneously. The routes can be link-disjoint or node-disjoint for tolerance to link and node failures, respectively. The receiver expects incoming traffic on different routes, so it always receives the information transmitted as long as all paths do not fail simultaneously. This approach eliminates any downtime and ensures that no data are lost due to a single failure.
As can be drawn from the different redundant mechanisms, one of the main differences lies in the switchover time, in which time can be divided into (1) Detection time based on monitoring the communication paths in order to detect failures. Common to all types.
(2) Provision time: if any failure is detected, the network control plane must calculate an alternative path. It only affects the restoration case.
(3) Switching time to the alternative path and the subsequent communication reestablishment. It generally does not influence in the case of parallel redundancy.
In the design of a network, different redundant methods must be determined on a risk versus reward tradeoff, assessing the need to reduce recovery times and the number of redundant paths compared to other factors, such as management and deployment costs. Obviously, the use of concurrent paths also implies an increase of resources and parallelism management, so they are oriented toward critical use cases as mentioned in Section 1.
Availability Calculation.
The availability of the communication connection is essential, however it never can be totally guaranteed. It is defined in ITU-T Recommendation E.800 as follows: "availability of an item to be in a state to perform a required function at a given instant of time or at any instant of time within a given time interval, assuming that the external resources, if required, are provided"; whereas ITU-T Recommendation Y.1563 assesses performance parameters for the specific case of Ethernet service availability. In [4], the authors illustrate an exhaustive analysis of network availability and different recovery methods, applied to several technologies. From a general view, the availability of the network ( ) can be quantitatively defined by the parameters known as Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) with the expression (1). As can be understood, resilience and redundancy are closely related through fault detection and isolation techniques, which are covered by the Operation, Administration and Management (OAM) tools. Depending on the fault nature, the failover period may be reduced and, therefore, the availability of a network is improved. This may be achieved by automating the prevention and detection of certain faults, while on occasions, the operators diagnosis and decision-making will be totally necessary. Consider Through an availability model based on Reliability Block Diagram (RBD), we can crudely estimate the impact of parallel redundancy on the overall network availability. Because the availability of a single path ( ) is defined by summation of the individual availabilities of the network equipments and transmission links, the availability of parallel systems ( ) may be calculated per the equation below: The network availability is improved in accordance with redundancy scheme, as shown in Figure 1(b), where up to four parallel paths are compared if we assume that the availability of each connection path is the same.
Traditional Layer 2 Redundancy Technologies.
Here we list some of the most relevant approaches to provide redundancy in layer 2 networks. A more extensive analysis can be found in [5].
Spanning Tree Approach. Given that in IEEE 802.3
Ethernet standards there is no mechanism to discard duplicate frames or time-to-live field, loops should be avoided. To that end, the more widely used protocols are based on the spanning tree approach that, employing a distributed 4 The Scientific World Journal algorithm, disables redundant network links for obtaining loop-free topologies. In a failure event, one or more disabled links are reactivated. Under this umbrella, we highlight some standardized protocols, such as the Rapid Spanning Tree Protocol (RSTP, IEEE 802.1D), which upgrades the original STP standard, and Multiple Spanning Tree Protocol (MSTP, IEEE 802.1s). In any case, the resulting topologies do not take advantage of all physical redundant links and, therefore, data do not necessarily follow the shortest path, without achieving the optimal delay, which is considered useful for time-sensitive services. Moreover, a common shortcoming of these protocols is that they do not guarantee a deterministic failover behavior. For example, the RSTP fault recovery time depends on the configuration parameters and the location of the fault; in [6] a method to calculate the maximum recovery time in a ring configuration is provided.
Link
Aggregation. These protocols (e.g., IEEE 802.1ax) can be considered as redundant techniques since they allow us to associate several ports to a single logical interface. This enable load balancing, reducing the failover time.
Shortest-Path Protocols.
Recently, several alternatives to Spanning Tree-based protocols have been developed. We highlight Transparent Interconnection of Lots of Links (TRILL, IETF RFC 6325) and IEEE 802.1aq. Both compute shortest paths avoiding loops and do load balancing through multiples paths. However, they are not conceived as active redundancy protocols. In the specific case of TRILL, [7] shows that it results in "an enhanced alternative to RSTP"; however it "is still unable to meet the required convergence time claimed by the Smart Grid requirements. " Additionally, there are recent attempts [8] to build active protection paths in TRILL networks so that "when a link on the primary distribution tree fails, the preinstalled backup forwarding table will be utilized without waiting for the reconvergence, which minimizes the service disruption. "
IEC 62439-3 Parallel Redundancy Protocol
Ethernet technology is being selected for many critical projects which demand dependable communication infrastructures that meet stringent reliability requirements. Nevertheless, technologies such as those mentioned above may not be valid in terms of recovery times. Therefore, this section presents a representative case in which it is necessary a minimal failover time, describing the IEC 62439-3 PRP as a mechanism for achieving it.
IEC 62439 and IEC 61850.
The IEC 62439 standard suite includes a set of redundancy control protocols for industrial automation. These protocols are oriented to support different topologies and recovery times; from which only two options (PRP and High-Availability Seamless Redundancy, HSR), defined in IEC 62439-3 [9], are able to provide bumpless redundancy in case of any single network failure. To achieve this, IEC 62439-3 proposes that the devices be connected by active redundant links; the specific PRP operation mode and the Ethernet frame specification will be described below.
PRP and HSR can be useful in many applications where high availability and low latency are required. But the most important use case is related to its adoption by the IEC 61850 specification, one of the most widely accepted standards for power system communication. This standard defines, inter alia, the requirements to be met by the network responsible for providing the connectivity service in power automation systems, which is based on Ethernet LANs. The IEC 61850 Edition 2 standard (2011) introduces more demanding applications than its first edition (2004), whose maximum transfer time for different messages types are defined in IEC 61850-5, tolerating a maximum recovery time in the order of 4 ms for the most stringent data requirements. For example, [10] focuses on substation automation systems and it studies the performance of time synchronization services in RSTP and PRP networks where multiple network failures are simulated and, obviously, PRP is much more tolerant to such failures.
Additionally, it is necessary to note that although upper layers can include resilient methods responsible for the detection of duplicates and error recovery (e.g., TCP), the achieved recovery time could not be sufficient to provide minimal latencies. Therefore, IEC 61850 introduces different services (defined by IEC 61850-8-1 and IEC 61850-9-2) that, using multicast service, are mapped onto the Ethernet link layer for functions that need to transmit time-critical data.
PRP Operation Process.
In PRP, specified by the IEC 62439-3 Clause 4, each device is connected in parallel to two LANs. PRP is fully implemented in the end-nodes, called Double Attached Node (DAN), so that network switches are protocol-agnostic, and even PRP specification is independent of intrinsic redundancy used in the LANs. Consequently, two networks can differ in topology, delay and performance. Only the requirements imposed on the networks "are having no connection between them, as they are assumed to be failindependent and having an identical MAC-LLC level" [9]. In this sense, a PRP end-device has the same MAC address in both interfaces. Another specific requirement is that the switches have to allow oversized frames, since DANs extend the Ethernet header, meaning that oversized frames (with length of up to 1532 bytes) can take place.
Regarding flexibility and compatibility with off-the-shelf devices (Single Attached Node, SAN), they can be attached directly to one network without having to be aware of PRP. Additionally, the standard specifies how to use PRP proxies, called Redundancy Box (RedBox), to which SANs can be connected redundantly to both networks (denominating them as Virtual DAN).
With respect to the operation process, a DAN implements a Link Redundancy Entity (LRE), responsible for managing the redundancy and duplicates transparently to the upper layers. This is done as follows.
(1) When the Entity receives a message from upper layers, it creates two frames by adding the so-called Redundancy Control Trailer (RCT) and calculating a new checksum. (2) The Entity sends out the frames through its both ports at the same time. These two frames traverse the two independent networks.
(3) At the destination node, the LRE has two operation modes to handle the received frames.
(a) Duplicate Accept ("for testing purpose" [9]) or Duplicate Discard: the latter, which is the most common mode, ensures that the upper layer receives only the first data frame. For this purpose, the LRE must maintain a buffer of the first received frames to recognize and discard duplicates. The buffer implementation affects the algorithm to detect duplicates (not specified by the standard); for instance, decisions about timeouts and buffer sizes must be consistent with network performance goals. (b) In both cases, the LRE removes the RCT and forwards the received frame to its upper layers.
In case that the duplicates are not discarded, upper protocols, such as IP and TCP protocols, can tolerate receiving and removing duplicates.
In addition, PRP provides a mechanism for the network supervision, so that each DAN monitors the status of each LAN and other PRP devices. This facilitates the control of network errors, as well as discovering other DAN. To this end, multicast frames, identified by a specific Ethertype (0x88FB), are used. Figure 2 shows a schematic PRP diagram and the frame format, since, to enable detection of duplicated frames, PRP nodes add the RCT, 6 bytes long, structured as follows. In summary, PRP is a simpler technique and more easily implementable than other approaches such as [11], which proposes a multiple path Ethernet scheme, along with a congestion control and packet retransmission mechanisms in order to be able of transmit data through parallel paths in a reliable manner.
Comparison with HSR.
High-Availability Seamless Redundancy (HSR, IEC 62439-3 Clause 5) can be considered a special version of PRP applied to certain topologies. In contrast to PRP, HSR requires only an additional path between two nodes; on that ground, HSR is typically used in ring topologies. Therefore, in an HSR network, the end-nodes are connected with each other, without needing an external intermediary. To this effect, each HSR device incorporates a bridge function that forwards frames from port to port. These differences make each one of them more appropriate in certain use cases, so we summarize the pros and cons of each protocol.
(i) While the PRP scheme depends on the network elements and supports two independent LANs of any topology, HSR is limited to ring-based topologies. This is very relevant for the development of our 6 The Scientific World Journal approach. Reference [12] describes different robust topologies that employ PRP and HSR jointly.
(ii) One limitation of PRP is that it is not strictly deterministic, since communication delays may vary depending on the topologies. In contrast, HSR facilitates calculating latencies since it is only necessary to know the number of nodes and their corresponding switching time. Although ring topologies also present inherent limitations, such as the maximum number of hops that does not cause the maximum latency is exceeded.
(iii) While PRP means a duplication of network equipment, HSR does not suppose this overhead, making it less expensive to deploy and maintain than PRP.
(iv) The latter implies that HSR works without dedicated Ethernet switches. By contrast, HSR nodes must implement switching function between their two ports. Accordingly, HSR should be implemented in hardware to meet acceptable time requirements; on the contrary, the PRP nodes can implement the LRE in software, which is also important for our purposes (this is not so in the case of RedBoxes).
(v) These requirements are related to the flexibility to accommodate standard nodes: unlike PRP, SAN cannot be inserted into HSR topologies.
Regarding other IEC 62439 redundancy protocols, a further detailed analysis can be found in [13], where the authors compare the different specifications. Among them, we also highlight the IEC 62439-4 Cross-Network Redundancy Protocol (CRP) and the IEC 62439-5 Beacon Redundancy Protocol (BRP), which do not provide a seamless communication since they implement a standby redundancy but, however, they allow us to establish cross links between parallel LANs, which can be considered a limitation of PRP, which can be overcome easily by using our scheme, as described in Section 4.2.
Proposed Architecture
The following is a description of the OpenFlow protocol upon which our approach is based, as detailed subsequently.
OpenFlow-Based Control Plane.
One of the most significant SDN technologies is OpenFlow, which is promoted, standardized, and supported by the Open Networking Foundation [2]. By means of the OpenFlow protocol, a controller can access and define the switch data path, adding, updating and deleting flow entries in forwarding tables. These tables contain multiple match fields (ingress port, metadata and packet headers), priority and actions associated with each flow entry [14]. The establishment of forwarding rules can be performed in a reactive mode, in which the controller dynamically inserts entries in response to switches requests; or through a proactive controller that prepopulates the flow tables statically, which is required by time-sensitive scenarios.
Although the OpenFlow centralized architecture may pose scalability issues and it seems to contradict with features required in critical time-sensitive environments, since version 1.2, OpenFlow allows switches to set backup multiple controllers or balance the load among them, avoiding single points of failure. Reference [15] studies the use of a distributed control plane and the optimal placement of controllers to achieve a better failure tolerant in Wide Area Networks (WANs). Likewise, article [16] simulates the latencies between switches and a different number and locations of controllers to find the appropriate recovery process after link failures, proposing a robust architecture against disasters.
Additionally, starting on version 1.1, the OpenFlow tables support a fast failover group entry [14] to accelerate the detection and fault recovery by acting directly on the OpenFlow switches without interacting with the controller.
In summary, as concluded in [17], "a centralized control also has advantages regarding network recovery. In a distributed network, recovering from a broken path can be a slow process. However, an OpenFlow controller is networkaware and it can find the new path faster. " With regard to opportunities, recent projects have studied the use of OpenFlow in redundant topologies. Particularly, we highlight two projects aimed at increasing the resilience where the duplication management function is relegated to the end-nodes, as occurs in PRP: (i) OLiMPS [18]: the Multipath TCP (MPTCP) technology enables, among other features, to do load balancing by using multiple paths between an end-to-end connection. Although MPTCP does not support the simultaneous transmission of the same information in different paths, it is notable the shown interest for using OpenFlow in the computation and provision of multiple link disjoint paths. Moreover, this paper studies the architecture reactivity when some links go down.
(ii) OpenRoads [19]: where OpenFlow is used to establish fully active redundancy and, consequently, improving mobility services. It is based on the fact that the nodes are able to maintain a redundant communication by different wireless technologies (WiMAX and WiFi).
However, it is noteworthy that multipath approaches oriented to centralized load balancing are not the focus of this study because, as stated in [20], "in safety critical systems, structural redundancy is typically not used to increase bandwidth, but to send redundant information over redundant paths, " which is the focus of our proposal.
OpenFlow as a Mechanism for Improving PRP Performance.
Here we describe an architecture that, based on the awareness of network configuration and traffic load, combines OpenFlow and PRP with the aim of creating multiple paths for pertinent data flows to achieve a better reliability. Therefore, unlike traditional deployments, in this approach the network control plane is not agnostic about PRP nodes.
The Scientific World Journal 7 Specifically, for establishing more than a single path between two PRP nodes, we distinguish two different implementations: (i) DAN-based operation mode: in which the new redundant paths are set in one or more LANs. Consequently, a PRP node receives more than once the same frame through each LAN, which is consistent with the IEC 62439-3 standard since the duplicated frames within 400 ms must be handled by the LRE.
(a) While a common deployment, where PRP nodes are connected through two spanning tree-based networks, is impaired when a second failure occurs in one LAN during the recovery period of the another one, in our proposal the performance is not degraded (without considering both failures in the node access links). (b) Moreover, this design enables the establishment of cross-links between each path, which is not possible in traditional PRP deployments because, as stated in [21], LAN A and LAN B cannot connect to each other "since both frames have the same MAC address, the switches would constantly change their address table and this might lead to unstable network conditions. " As mentioned before, CRP and BRP provide crossredundancy, but they do not take advantage of different paths simultaneously.
(ii) SAN-based operation mode: it allows network designers to increase notably the availability without having to deploy a complete redundant system. Hence, in this operation mode, a PRP node is connected to a single LAN, in which an OpenFlow controller configures more than one path for the transmission of the same content. Thus, the PRP device still receives duplicated frames through its only single interface, having to discard duplicates. This new operation mode reduces redundant resources (cabling and hardware) and, therefore, its CAPEX (Capital Expenditure) and OPEX (Operational Expenditure).
In both cases, the OpenFlow controller pushes flow entries that replicate certain unicast traffic along predetermined multiple paths towards the destination. The first mode means an increase in the availability compared to a common PRP deployment based on, for example, RSTP, whereas the latter is not as robust as the first one since nodes are not doubled attached, but it involves an improvement over a traditional layer 2 LAN with non-PRP compliant nodes.
Regarding the implementation, we have chosen the Floodlight controller [22], which provides a fully functional control plane responsible for tasks such as topology and device discovery, path computing and loops prevention. As shown in Figure 3, the proposed development forms a Network Operating System (NOS) where we can identify different modules that, through a Northbound API based on JSON Representational State Transfer (REST), allow us to design multiple applications running on NOS to take effect on forwarding path of switches. Consequently, through a central interface, the control plane may receive diverse information, such as commands or status alerts, which may be translated into flow entries that are automatically populated in the deployed OpenFlow switches. Then it instantly varies the behavior of a network, being able to allocate resources to different type of traffic, exposing more paths to increase reliability, etcetera. Figure 4 illustrates the process flow for identifying and forwarding data traffic.
Other Advantages.
Although our main objective is to increase the availability, the proposed scheme also improves the network utilization through a network global view and dynamic actions, processing the data path and managing resources efficiently. We have implemented the following functions.
Shortest Path Forwarding.
In order to meet timing requirements, latency analysis is a required procedure. Reference [23] studies the components of delay in IEC 61850 networks and it compares the advantages of shortest paths over spanning trees. With our scheme, the fact of using certain OpenFlow controller capabilities represents, by default, an advantage with respect to spanning tree deployments because this allows to compute and set the flow entries that form the shortest path in each LAN. In our case, the Floodlight controller is aware of the network topology through topology discovery services (Figure 3) based on the Link Layer Discovery Protocol (LLDP, IEEE 802.1AB) and we use the Forwarding Floodlight module that is able to perform unicast traffic forwarding along the shortest path in mesh topologies, which is not possible with the Learning Switch module [22].
DAN/SAN Awareness.
In spite of the fact that, by default, multicast PRP supervision frames flood the LANs and reach all devices, they are only interpreted by PRP nodes. Although "a RedBox should be configured to stop the transfer of the supervision frames to the SAN devices, so there is no supervision frame flooding to the SANs" [24], this is not required for switches to which the SAN nodes are directly connected. Consequently, this traffic is received by them, in addition to loading the network needlessly. For the purpose of reducing such traffic, we propose a supervision frames filtering that uses the Device Manager and Firewall Floodlight modules ( Figure 3); so that, periodically, the control plane learns about devices, taking information about MAC and IP addresses, as well as their attachment points to networks, being aware of nodes that are SAN or DAN. Accordingly, it denies the mentioned multicast frames at egress ports/switches. Therefore, the platform ensures that the supervision flows, which are determined by the Ethertype (0x88FB) and a unique multicast address in the same network, only reaches devices with two interfaces; minimizing the amount of global traffic. Figure 3: Proposed architecture: NOS and applications.
Critical and Noncritical
Traffic. PRP compliant devices duplicate all packets regardless of their priorities, which entails that the available network bandwidth is halved. However, this may be inefficient to meet requirements of the applications in many aspects, such as scalability. Consequently, it could be interesting to filter noncritical traffic in order to free resources. For this purpose, the implementation is able to filter TCP/IP traffic, which can be performed in two different modes.
(i) Data blocking: it prevents the noncritical traffic propagation in one of the LANs, which is performed by using Firewall module ( Figure 3). Therefore, this is in accordance with the suggested in [20]: "while critical messages are sent in both directions, it is sufficient for noncritical messages to be sent in one direction only. " (ii) Data rate limitation: our platform allows network designers to distinguish flow types by establishing traffic shaping policies in a centralized way. In particular, we use the Quality of Service (QoS) module, published in [25], that makes possible to push specific traffic to different queues (OpenFlow supports QoS [14] by setting the network Type of Service (TOS) bits and enqueuing packet), which must be previously created and configured on the particular switch. In our case, because the implementation is performed with Open vSwitch (software switch that supports OpenFlow [26]), the resources are provisioned via the Open vSwitch Database Management Protocol (OVSDB, standardized in [27]) with ovsdb-client and ovsdb-server. As a result, OpenFlow actions are populated together with QoS policies, take into consideration priority requirements. into the control plane, allows the controller to act dynamically. Among the tools for traffic passive monitoring, we use the OpenFlow protocol itself, which allows the controller to retrieve counters per flows, ports and queues from controlled switches [14]. For example, paper [28] uses this feature for implementing network monitoring tasks and to estimate a traffic matrix, namely the traffic volume from every ingress point to every egress point, for traffic engineering purposes.
Responsiveness and Resources
Our development is aware of the status of network resources, such as the throughput, detecting network stress events during which delays may increase. In particular, the proposed scheme makes use of flow statistics to adapt the network to the instantaneous needs, including the following actions.
(i) Enabling and disabling redundant paths according to the resource utilization, thereby, the platform allows us to mark thresholds for triggering different Open-Flow actions; specifically, a monitoring application receive per-flow meters via the Floodlight REST API and when a threshold is exceeded, the multiple paths are flushed of redundant data and vice versa.
(ii) Supervision frame rate checking which is possible because the monitoring application retrieves counters about multicast frames and it checks that they are in line with the expected rates.
On the other hand, disasters and large-scale events may cause multiple points of failure, of which is difficult to determine a complete knowledge in situations where the redundancy control protocol is delegated to the end-nodes, such as defined by IEC 62439-3. As we described previously, PRP allows to control the network integrity and to detect errors. Nevertheless, the provided mechanisms only are measured in a counter, which is typically accessible via the Simple Network Management Protocol (SNMP, IETF RFC 1157). This methodology may be complemented by using OpenFlow, which can greatly improve the response performance in disasters, with the global view provided by the controller. This involves the integration of disaster management systems with the control plane, identifying nonaffected routes and where the resource management system must take into account not only the availability of resources and policy, but also the QoS requirements of the application. This approach is clearly reflected in [29] where different disaster metrics are detected, evaluated and corrected by an OpenFlow controller.
Performance Validation
In this section we present the emulation tests and results that serve to evaluate the capabilities of the proposal.
Emulation Conditions.
In order to build OpenFlow scenarios, the Floodlight controller interacts with Mininet [30], the most widespread tool for emulating SDN-based networks, that "creates virtual networks, running real kernel, on a single machine. " Additionally, with the aim of sending and receiving redundant frames, we use the PRP Stack software, published in [31], so that each emulated host over Mininet supports PRP, and therefore, it is able to detect and discard duplicate frames, as well as send supervision multicast messages. Thus, each one has two network interfaces which are virtualized in a single PRP interface. These devices send a PRP supervision message every two seconds.
The results presented here correspond to an out-of-band control-plane configuration in which Mininet uses Open vSwitch to create a set of Ethernet bridges that communicate with Floodlight, which runs on another machine, through independent network resources; although Open vSwitch allows in-band configurations, where data and control planes share the same resources. Thus, failures that affect OpenFlow traffic are excluded from the analysis. For these situations, in [32] the authors implement restoration and protection mechanisms in in-band OpenFlow networks and show recovery times for data and control traffic.
Regarding controller redundancy, both Open vSwitch and Floodlight allow backup configurations, in which bridges communicate preferentially with the master controller, otherwise with the slave one. Moreover, the Open vSwitch has a fall-back mode in case of controllers falling, when it changes to standalone mode (without external controllers).
With regard to the emulation parameters, Mininet uses NetEm tool [33] for emulating various capabilities of links, this allows us to set different network conditions, such as (i) Frame loss rate: NetEm allows to emulate variable packet losses. The smallest possible nonzero value causes 1 out of 43103448 packets to be randomly dropped [33]. In addition, we have modifed Mininet to enable decimal values of rate inputs. (ii) Mininet emulated links use Linux Traffic Control to configure packet queues on interfaces, and it is possible to fix their bandwidth, which serves to characterize saturations and path performance degradation. In our case, the emulated nodes are attached by 100 Mb/s Ethernet links and switches are interconnected by 1 Gb/s links in a ring topology. (iii) We have not included any additional synthetic delay despite being possible in NetEm.
Regarding traffic flows, we use the iperf tool for generating UDP flows, which serves to obtain the recovery connection time; since, as defined in [34], test tools that offer monitoring the number of lost frames allow us to know the recovery time of a system ( ), calculated by the following expression:
=
Lost frames Offered rate .
In all experiments, iperf was configured to send packets at a rate of 50 Mb/s. As a result, a device will be continuously sending packets to determine the packet loss rate when network elements fail. Consequently, this development allows us to compare the improvement in recovery time and packet loss rate between different technologies. [35]) that does support it, from which there are no references about the improvement in the reconfiguration time.
Recovery
As mentioned before, the proposed emulation platform allows us to switch off a network element, either nodes or links. In the shown test, one of the deployed bridges is disabled and the network acts depending on its configuration. For the specific case of OpenFlow, we use the following components.
(i) When a bridge goes down, automatically Floodlight receives a PORT DOWN notification (Port Status message [14]) from Open vSwitch.
(ii) Given that the source sends data continuously, the forwarding tables of Open vSwitch maintains the previous flow entries as matchable (data reception period is smaller than the table entry timeout). In our case, to avoid this behavior, the platform loads the Port Down Reconciliation module that, based on LLDP, is responsible for "reconciling flows across a network when a port or link fails" [22]. Regarding the controller response time, it is necessary to emphasize that it is not delayed by other requests, and the mean Round-Trip Delay Time (RTT) between the Mininet machine and the Floodlight host is only 0.3596 ms, measured by ping tool.
(iii) Afterwards, the proper shortest path is recomputed and the network retains connectivity between nodes and the succeeded traffic flow can be verified. Figure 5 addresses how the different technologies behave after failure in rings of 5 hops; particularly, the worst case for spanning tree protocols is shown, namely when the root bridge fails; box plots include the interquartile range, sample median, extreme measurements and outliers (note logarithmic scaling of -axis). This test is consistent with the IEC 62439-1 Standard [6] and its results corroborate that STP convergence time is considerably slower than RSTP. Moreover, these spanning tree protocols depend highly on the number of nodes and the failure location in the ring, which is limited to 40 hops according to the IEEE 802.1D, although it is true that there are available other layer 2 protocols more suitable for this type of topology, which is detailed in [13]. The size limitation does not occur in OpenFlow networks.
Robustness of the Proposal in Multiple Failure Cases. PRP
is not only useful for critical applications where data loss is not permitted, but also it may be relevant in situations where the loss rate may be relatively high, such as analyzed in [36], in which the authors utilize PRP with two redundant wireless channels, improving the overall reliability. This may be applied to tolerate arbitrary faults, such as accidents, natural disasters, malicious attacks or blackouts. Below, we study different lossy topologies and study scenarios in order to understand the robustness provided by the establishment of multiple active paths. As a test of concept, we generated two identical and independent topologies to which the PRP nodes are connected. Furthermore, both topologies enable the creation of disjoint paths of equal cost, which reduces the possible number of cases and makes the understanding of them easier (Figure 6(a) sketches how this redundancy is organized). We perform a comparative evaluation in terms of recovery time for the following cases: (i) simple LAN where only one path is established, Figure 6 Regarding the implementation of redundancy, the Static Flow Pusher Floodlight module (Figure 3) is used to set static rules that form parallel paths. As an outcome, Table 1 show the mean global loss rate, taking into consideration the same packet loss rate per link, which is inversely related to their availability, as also considered in [36]. These results show the reduction of global loss rate is achieved by using multiple parallel links. Therefore, our scheme allows to implement different topologies with extremely low packet losses.
Related and Future Work
Regarding related work, it should be noted that it is the first time that PRP features are included in OpenFlow networks. In comparison with other possible approaches such as [7], which studies the combination of PRP and TRILL networks, practitioners to supplement it by taking into account some principles of the PRP protocol. " With respect to other layer 2 technologies, we have reviewed standards protocols that have been proposed by industry organizations such as the IEEE or IEC, without considering proprietary solutions or protocols with a limited application field, such as the ARINC 664-p7 or SAE AS6802 specifications. Both of them propose extensions to Ethernet for providing QoS and redundancy mechanisms. Reference [37] discusses the capabilities of both solutions, along with the IEEE AVB (Audio Video Bridging) protocol. In this regard, it is necessary to note the design proposal of the new generation of AVB, through the Time-Sensitive Networking Task Group, that includes IEEE P802.1CB: a recent "Project Authorization Request" to address the development of active redundancy [20].
Concerning future work, nodes implement PRP without changes in the SAN-based operation mode. However, it is clear that the protocol may be modified to reduce its overload and complexity: certain fields, for example, LAN ID, may not be necessary in this configuration mode. Otherwise, the work here presented manages unicast traffic redundancy; thus, we consider it necessary to study how the OpenFlow protocol may provide multicast traffic filtering along active redundant paths.
Furthermore, the platform works as a NOS where new capabilities can be incorporated, but always with a greater focus of meeting manageability and flexibility in the prevention, detection and response to abnormal situations, such as failures or disasters.
Conclusion
Mission-critical networking applications require redundant topologies that avoid single points of failure for improving the overall network availability. Thus, the establishment of parallel paths affects the network downtime, improving network resilience.
In particular, this paper focuses on PRP, one of the most representative standards for ensuring zero switchover time when a link or switch fails; in which, the redundancy control is responsibility of end-nodes. For the purpose of increasing The Scientific World Journal 13 the availability, this paper shows the importance of maintain multiple redundant infrastructures using OpenFlow. Our solution establishes multiple redundant paths in individual LANs, achieving a minimal disruption in case of multiple failures. Moreover, this proposal enables flexible topologies that are not possible to implement in current deployments.
Additionally, we have presented certain features that facilitate the dynamic resource management according to the network status, making the most of available resources. This shows the benefits of using a centralized external agent that provides adaptability for different services. Hence, our proposal achieves a more flexible network control, fulfilling stringent real-time requirements. | 9,616.6 | 2015-02-10T00:00:00.000 | [
"Computer Science"
] |
THE IMPACT OF ADDITIONAL CLIL EXPOSURE ON ORAL ENGLISH PRODUCTION
This study aims at testing the effectiveness of additional CLIL (Content and Language Integrated Learning) exposure on the oral production of secondary school learners of English as a Foreign Language. CLIL learners, who had received a 30% increase in exposure by means of using English as a vehicular language, were compared to mainstream English students in a story-telling task. Analyses indicated that CLIL learners’ productions were holistically perceived to exhibit better fluency, lexis and grammar while no differences were found as regards content and pronunciation. Besides, although Non-CLIL learners’ productions were larger in quantity and longer in time, CLIL learners produced denser and more fluent narrations, as attested by their higher number of different words over total number of words, of words over turn, and of utterances over turn. Additionally, CLIL learners resorted to their first language to a lesser extent and demanded fewer vocabulary clarifications. Our findings thus go along with previous research which has revealed advantages of additional CLIL exposure on oral English production.
INTRODUCTION
Many schools in Spain are currently incorporating Foreign Language (FL) teaching programmes where English is used as a vehicular language, the socalled Content and Language Integrated Learning (CLIL) programmes, to teach other school disciplines 1 . This type of exposure aims at developing both subject and language knowledge (Marsh 1994) driven by the need to accommodate European Commission requirements on multilingual education with the purpose of facilitating communication and social cohesion amongst European citizens. of language and content outcomes and ii) establish a consolidated educational approach based on CLIL tenets, namely intense exposure and real communication.
Regarding language outcomes (see Dalton-Puffer 2011; Ruiz de Zarobe 2011 for an extensive description), several comparative studies have been conducted so as to investigate how CLIL and traditional FL courses compare to each other in terms of FL proficiency achievement. Researches such as those of Sylvén (2004Sylvén ( , 2006 in Sweden, Bürgi (2007) in Swizerland, Xanthou (2007) in Cyprus, or Jimenez Catalán, Ruiz de Zarobe and Cenoz (2006), Jiménez Catalán and Ojeda (2009), Jiménez Catalán and Ruiz de Zarobe (2009) and Moreno Espinosa (2009 in Spain have pointed out to the supremacy of students enrolled in CLIL settings over traditional EFL ones as far as vocabulary is concerned. The development of morphosyntactic skills has also been explored in CLIL vs. traditional FL environments, where significantly better outcomes have been attested for CLIL groups in countries such as Austria (Ackerl 2007;Hüttner and Rieder-Bünemann 2007) and Spain (Villarreal and García Mayo 2009;Martinez Adrián and Gutierrez Mangado 2009). Studies on writing skills in Spain (Navés 2011;Ruiz de Zarobe 2010) also show positive results on the part of CLIL learners.
More relevant to the present work are those studies which have explored CLIL outcomes as regards oral skills. In fact, oral production has been acknowledged to be one of the linguistic aspects which may benefit most from the methods which foster the use of the language in meaningful contexts (Block 2003), and CLIL is undoubtedly one of these. However, some authors point out that CLIL leads to "erratic results as far as speaking is concerned" (Van de Craen et al. 2007: 71). The still few studies conducted in these lines have used diverse perspectives in the examination of oral production skills. Some studies have analysed overall oral proficiency by means of holistic methods (Lasagabaster 2008;Ruiz de Zarobe 2008), other studies have measured discourse production (Hüttner and Rieder-Bünemann 2007;Whittaker and Llinares 2009) while others have focused on pronunciation (Gallardo del Puerto et al. 2009, Rallo Fabra andJuan-Garau 2010). Lasagabaster (2008) conducted a holistic comparative analysis between CLIL and traditional FL groups of oral (and written) production in English by 198 secondary school students in the Basque Country. In this study the CLIL group outscored those subjects in the traditional programme significantly in all the variables analysed: pronunciation, vocabulary, grammar, fluency and content. Similar results were also obtained by Ruiz de Zarobe (2008) using the same instrument, though in a previous study Ruiz de Zarobe (2007) found no statistical significant differences between judges' holistic evaluations of CLIL and non CLILs' oral productions. However, it should be noted that all these studies do not specify whether extra-curricular exposure to the FL was controlled. Hüttner and Rieder-Bünemann (2007) also investigated the effect of CLIL on oral narrative competence in a comparative study with 44 secondary students in Vienna. This study examined the narrative aptitudes in an analysis of the content development in the actions depicted. This study revealed that the CLIL group outperformed their Non-CLIL counterparts, as they referred to all plot elements and textualise conceptually complex elements to a slightly greater extent. However, this study poses some limitations such as the fact that differences between groups were not supported statistically and affective variables such as motivation were not controlled. In fact, it is important to note that affective variables may play a role, as CLIL learners have been purported to show lower inhibition levels when speaking (Dalton-Puffer, Hüettner, Schindelegger and Smit 2009), to exhibit less angst in the classroom (Dalton-Puffer 2009), and to have better attitudes towards FL learning and multilingualism (Lasagabaster 2009;Lasagabaster and Sierra 2009). Whittaker and Llinares (2009) conducted some preliminary work on oral production in the CLIL classroom in first-year secondary students. Although the authors claim that their data await to be statistically analysed against a control group, they report they noticed a rise in oral fluency by the end of the year (these data are not provided, though) and comment that, as indicated by the number of words and error-free clauses, CLIL productions were as rich as those produced by traditional EFL learners in late secondary levels.
As for pronunciation, Gallardo del Puerto, Gómez Lacabex and García Lecumberri (2009) compared the degree of foreign accent of teenage students that were learning English through traditional classroom instruction with those learning in CLIL environments. Additionally, they tested the communicative effects of foreign accent, specifically the intelligibility and irritation produced by learners' accent in a narration task as perceived by a group of naïve native speakers of English. This study concluded that CLIL students' accent was judged to be more intelligible and less irritating than that of the students engaged in traditional FL lessons. However, and surprisingly, no differences in terms of degree of foreign accent itself were discovered. In the same vein, Rallo Fabra and Juan-Garau (2010) have recently conducted a study in which intelligibility and accentedness differences between CLIL and FL students were also explored longitudinally. This study analysed differences between the two groups over a year and also added a comparison to a group of English monolingual speakers matched in age. Preliminary results in a reading aloud task also read that CLIL students were more intelligible than the FL ones and that differences in accentedness were slight. Interestingly, no differences between the two testing times (1 year apart) were found in the CLIL group, indicating that one year of CLIL instruction may not be sufficient to improve aspects such as intelligibility or accentedness. The authors also suggest that, in fact, these aspects may not improve unless specific attention is driven towards them (see also Garcia Lecumberri and Gallardo del Puerto 2003; Fullana 2006).
Given that existing research has produced arbitrary results on the effects of CLIL on oral skills ( Van de Craen et al. 2007;Ruiz de Zarobe 2007) and that potential positive outcomes have been suggested to be less evident in secondary school students ( Van de Craen et al. 2007), the present study aims at exploring the oral production on the part of two secondary education groups which, having started learning English as a FL at the same age and presenting similar motivation rates, differ in the methodological approach and in amount of exposure, since one of the groups has received CLIL instruction for 3-4 years in addition to traditional FL lessons.
PARTICIPANTS
The participants in this study were 28 Basque-Spanish bilingual children attending secondary school in 3 rd and 4 th grades with a mean age: 14.6 (see table 1). Exposure to the foreign language outside school was controlled and, hence, the sample was selected by eliminating those learners who attended extra lessons or had stayed abroad in English-speaking countries. Participants received school instruction in Basque (the minority language in the community), whereas Spanish (the majority language in the Basque Country) and English (a foreign language in the community) were school subjects to which 4 and 3 hours per week were devoted respectively. All participants had started learning English when they were 8 years old. Learners were divided into two groups (CLIL group and Non-CLIL group) of 14 students each according to whether they were/not receiving extra exposure to English by means of CLIL. Both groups were made up of 10 students in their 6 th year of English learning (3 rd graders) and 4 subjects in their 7 th year of English instruction (4 th graders).
By the time of testing, the Non-CLIL group had received around 720 hours of traditional EFL teaching while the CLIL group had received this same exposure in addition to an average of 250 hours in the CLIL fashion, including subjects such as science, biology or geography/history (see table 1). The CLIL programme had been implemented in secondary school so the CLIL learners had been receiving content-based instruction in English for 3-4 years.
INSTRUMENTS
The participants were enrolled in a story-telling activity in which they were individually presented a series of wordless black and white vignettes: Frog, where are you? (Mayer 1969). Students had to look at the pictures and tell the interviewer the story in English. Productions were recorded in a digital audio tape recorder (TCD-D100) in a quiet room. Participants' productions were assessed holistically by 2 trained listeners involving the following variables: pronunciation, use of vocabulary, grammatical correctness, fluency and content development in a 1 to 10 scale (Cenoz 1991). The assessment sheets were facilitated with instructions/ guidelines which the judges could access whenever needed. The two assessors (aged 30-35) were Spanish-natives postgraduates in English Studies and experienced language judges. A second analysis was also computed so as to explore the productions quantitatively. The full outputs were transcribed and codified in Childes and a frequency count was computed for the following variables: total no. of words, total no. of words-L1/s transfer, (total number of words used in English only), total no. of different words, total no. of utterances, total no. of turns as well as for no. of different words over no. of words, no. of utterances over turn and no. of words over turn. The time the students used to narrate the complete story was also controlled for (narration time) as well as those Basque and Spanish words uttered by the student (L1/s transfer) and, finally, the number of interventions on the part of the person interviewing (interviewer turns). With regard to this variable, instructions for interviewers read that these were only to intervene should the subject explicitly ask for lexicon clarification. All these variables will be clustered in the results section in three main frameworks: variables which elucidate 'amount of production': total no. of words, total no. of words-L1/s transfer, total no. of (table 4); and variables which reveal strategies which the students may use to compensate for lack of L2 resources, namely native language transfer (L1/s transfer) and appeal for vocabulary assistance (interviewer turns) (table 5). Motivation towards the English language and the English lessons was controlled for by means of two tests which examined attitudes towards the English language in a 7-point Likert scale (Motivation test 1) and a 13-question test which tested mainly instrumental motivation or the practicality of the language for their future careers (Motivation test 2). Neither of these two variables (in percentages-% and standard deviations-SD in Table 1) reported significant differences between the CLIL and Non-CLIL groups indicating that the groups exhibited a similar and rather positive motivation rate towards the English language.
HOLISTIC ASSESSMENT
A first and necessary analysis explored the reliability of the holistic assessment by correlating the data provided by the two judges for the two groups. Moderate to high correlational indexes were found in all variables assessed (pronunciation: r(26) = .63, p < .001; use of vocabulary: r(26) = .84, p < .001; grammar correctness: r(26) = .93, p < .001; fluency: r(26) = .93, p < .001; content development: r(26) = .78, p < .001), indicating that both judges employed similar criteria in the evaluation. T-Test analyses were computed so as to establish comparisons between the means (range: 1-10) of the 5 variables assessed by the two judges in the holistic evaluation of students' oral productions according to the instruction received (CLIL and Non-CLIL). As can be seen in Table 2, the CLIL group outscored the Non-CLIL group in all variables analysed. The CLIL group significantly outscored the Non-CLIL group in grammar and fluency (t(26)=2.94, p<.05 and t(26)=2.10, p < .05 respectively), which assessed grammatical accuracy and communicative effect and continuity and speed of speech respectively. Along the same lines, CLIL superiority turned out to be marginally significant (p=0.84) in vocabulary.
However, those variables which measured density of production ( Table 4. Density of production: mean scores, standard deviation (SD) and significance.
As for compensation strategies (Table 5), it can be observed that the CLIL group used the L1/s significantly less than the Non-CLIL group (t(25) = -3.31, p < .005). It is also observed that that interviewers interacted significantly more with the Non-CLIL group, interviewer turns: (t(25) = -2.69, p < .05), an indicator that these subjects demanded lexical cues more often than the CLIL group. Table 5. Compensation strategies: mean scores, standard deviation (SD) and significance.
DISCUSSION
The present study has provided comparative data on oral output outcomes on the part of students enrolled in a CLIL programme and students engaged in traditional foreign language lessons (Non-CLIL) so as to elucidate the potential effect of additional CLIL exposure. The holistic assessment of oral production has evinced that CLIL students outscore traditional students in all variables but that this superiority is only significant in use of grammar, fluency and vocabulary. These findings go along with those studies reporting that in CLIL settings fluency and vocabulary development seem to be more benefitted areas (Dalton-Puffer, (Hüttner and Rieder-Bünemann, 2007). There are several factors which can account for the hindrance in pronunciation in these learning contexts. First, teachers are very often non-native speakers in these formal learning environments (Cenoz 2003) and we may find various levels of phonetic competence among these professionals. Second, pronunciation has been referred to as "the least useful of the basic language skills" (Quijada 1997) given that language teaching goals aim at the need to understand and be understood (intelligibility) rather than attaining a native accent (Jenkins 2000;Levis 2005). In fact, research has shown that intelligibility may not be necessarily confronted with foreign accent (Munro 2008). A third possible factor which may have contributed to the lack of advance in pronunciation is the fact that most of the English text books used in Basque secondary schools are characterised by the scarcity of exercises targeting pronunciation skills (Gallardo del Puerto 2005). Finally, a further sociolinguistic factor may be mentioned in these lines, namely the poor presence of native English in the media and entertainment given the strong present of the dubbing industry in Spanish (TV series and cartoons, films at theatres or video games). We did not observe differences in content development between the groups. These results may be related to the type of task used in this study and the administration mode. The story telling activity guided with pictures was presented to the students so that they would access the vignettes sequentially during the task. This procedure, along with the fact that the story was the same for all the subjects, may have limited the development of the plot or further development of characters or scenes. It shall be noted that for a more efficient assessment of content development skills a less guided (maybe semi-guided) task may better explore potential differences in the contextualization and character, plot and scene development. As other authors have pointed out (Hüttner and Rieder-Bünemann 2007), a further possible reason for this lack of differences in content development may be that of cognitive development, that is, the ability to extend and detail the story may progress independently of the amount of the type of instruction received.
The quantitative analysis provided in this study revealed interesting results. Unlike Whittaker and Llinares (2009), simple frequency counts showed that the Non-CLIL students produced longer outputs: more words, more different words, more utterances and more turns (Table 3). It is interesting to note that when the variable use of the L1s was controlled for, these differences were still present (Total No. of words-L1s transfer, in Table 3). A further important measurement taken, narration time, revealed that these students used significantly more time in telling the story than the CLIL group. However, when exploring the data in terms of density in production (Table 4), measured in number of different words over number of words, number of utterances over turn and number of words over turn, and in terms of compensation strategies (Table 5), measured in amount of L1 use and number of interviewer interventions, data reveal some advantages on the part of the CLIL group. First, their outputs become more compact as they use more utterances and words in each turn, data which go along with the higher fluency on the part of the CLIL group observed in the holistic assessment and which evinces that the two analyses used in the present study (holistic and quantitative) report similar findings. Secondly, those variables which aimed at exploring compensation strategies revealed that there was a significantly higher use of words and expressions in Basque and Spanish in the Non-CLIL group, as well as many more interviewer turns, in the form of, mainly, vocabulary clarifications. This may have accounted for the advantage on the part of the Non-CLIL groups in terms of 'quantity' of production observed in Table 3 but actually reveals an advantage on the part of the CLIL group in 'quality' of production, understood in this study as revealing a more fluent and denser narration as well as a better ability to limit the access to the L1/s. This last finding could evince that CLIL students either already knew the vocabulary they needed to tell the stories or showed a decrease in the use of negotiation and repair strategies which characterise foreigner talk (Gass & Varonis 1991). In other words, it might be conceded that CLIL learners do further rely on target language-based knowledge or compensation strategies, which makes them be less dependent on both the L1 and the interviewer.
CONCLUSIONS
Our study supports the findings of those investigations indicating that the CLIL approach is associated with better language outcomes (Ackerl 2007;Hüttner and Rieder-Bünemann 2007;Bürgi 2007 Sylvén 2004Sylvén , 2006Villarreal and García Mayo 2009;Xanthou 2007) and more particularly the findings of research pinpointing that oral production can be enhanced by CLIL (Hüttner and Rieder-Bünemann 2007;Lasagabaster 2008;Ruiz de Zarobe 2008;Whittaker and Llinares 2009). We have verified that additional CLIL exposure leads to better oral production in a story-telling task. More specifically, our CLIL learners have been found to display more fluent and denser speech characterized by better grammar and vocabulary, as well as lesser reliance on both the L1 and the interviewer's help, all of which make CLIL learners more efficient and independent speakers of the foreign language. However, as far as pronunciation is concerned, and in agreement with previous research (Gallardo del Puerto, Gómez Lacabex and García Lecumberri 2009; Rallo Fabra and Juan-Garau 2010), the positive effect of CLIL is not so clear, which confirms Van de Craen et al 's (2007) conclusion regarding the controversial results of CLIL in the case of oral production.
Nonetheless, the present study is not without some limitations. First, we have been unable to gather data from groups which only differ in amount of exposure, that is, groups which may have received the same amount of English hours differing in the type of instruction (CLIL vs. traditional EFL) only. This is so given that CLIL methodologies are mainly being implemented in Spain by adding exposure to traditional English lesson rather than substituting those hours by CLIL ones. As a result, the two groups analysed in the present study not only differ in type of exposure but also in amount of exposure, the CLIL group having received more instruction hours, which may be interpreted as a factor contributing to the superiority observed. Some researchers (Lasagabaster 2008;Navés 2011;Ruiz de Zarobe 2008, 2010 have tried to rule out the effect of amount of exposure by comparing students receiving additional CLIL exposure to traditional students enrolled one or two grades above. The result of these comparisons, however, seems to indicate that, in spite of a younger age, CLIL students obtain better language outcomes than traditional students. Hence, we will try to approach/address this type of comparison in future research. Alternatively, a comparison between our CLIL students and a group of amountof-exposure-matched Non-CLIL peers who have started to learn English at an earlier age than their CLIL counterparts would be addressed, if possible. A further limitation of our study relates to the nature of the instrument employed. We are aware that, having not given speakers a set narration time in the story-telling activity, measurements such as the type-token ratio provided by Childes may be less reliable when interpreting density of vocabulary used (McKee et al. 2000) as those students having taken longer to narrate the story are likely to have used a wider range of lexicon. In an attempt to control for this factor, we have provided duration of narration time as a variable and although we did not find differences in density (No. of different words/No. of words in table 4), we could see that the group using more narration time did have a higher ratio in this variable. The lack of significance in the present study may owe to the nature of the task: a picture-guided task, which could have hindered these effects as the pictures provided may have led the participants to access similar lexicon if compared to a free speech task or a story-telling task without guiding pictures. | 5,205.2 | 2013-05-29T00:00:00.000 | [
"Education",
"Linguistics"
] |
Endogeneity of the elasticities and the real exchange rate in a balance of payments constrained growth model: cross-country empirical evidence
Balance of payments constrained growth models are notable for their longevity. This is especially true for the case of Thirlwall’s Law, which defines that a country’s sustainable growth rate is given by the ratio between the income elasticity of exports and that of imports. In light of this, the current paper explores the hypothesis that the income elasticities of this type of models are endogenous. The debate on the latter is resurgent in the literature. The results provide evidence that the ratio is, indeed, exogenous, and that the level of the real exchange rate influences economic growth as it determines such ratio. In other words, the real exchange rate is important for improving non-price competitiveness without, however, making the ratio between elasticities endogenous.
Introduction
seminal paper suggests that a country»s maximum sustainable growth rate is given by the equation that defines the growth rate compatible with balance of payments equilibrium. Its equation is ! !" = # $ ⁄ = &' $ ⁄ , where # is the growth rate of exports, $ is the income elasticity of the demand for imports, & is the income elasticity of the demand for exports and ' is the growth rate of world income. This relationship came to be known as Thirlwall»s Law.
As McCombie (2011) puts it, the rationality behind this "law" is that no country can grow faster than compatible with balance of payments equilibrium for long periods of time or its foreign debt, as a proportion of GDP, would rise to such a level that would cause international confidence to plummet, a decrease in the capacity of acquiring foreign credit and a currency crisis. On the other hand, if the balance of payments equilibrium growth rate is lower than factor endowments would otherwise allow, the country is constrained to grow at a lower speed.
Through time, objections to Thirlwall»s hypotheses have arisen (for example, Blecker, 2016;Cortes e Bosch, 2015). It is true that the existing studies, both for developed and for emerging countries, suggest that "Thirlwall»s Law" cannot be rejectedor, yet, that there might be differences between the income elasticities of countries. This notwithstanding, few works suggest an endogenous relationship between the ratio of income elasticities and relative growth rates. The importance of the real exchange rate for the balance of payments constraint and for economic growth is another neglected issue.
This work intends to empirically asses the hypothesis that the elasticities of the balance of payments constrained growth model are endogenous. It furthermore intends to identify the level of the real exchange rate as one of the elements that explain growth, particularly for developing countries. In all effect, the works based on Thirlwall»s (1979) model assume the hypothesis that, in the long-term, relative prices either remain unchanged or have a negligible impact. There is, on the other hand, an emerging literature that highlights the influence of the exchange rate on growth, both directly (see, for example, Rodrik, 2008;Gala, 2007;Sampaio and Gala, 2008) and indirectlyin this case, as it determines the income elasticities (as in Missio and Jayme Jr., 2012).
Besides this introduction and the conclusions, this article is divided into three more sections. The following one discusses export-led growth models and the relevant theoretical issues. Section 3 in turn presents the model and the data we used, while the fourth one empirically analyses the relationship between the endogeneity of the elasticities and the role of the real exchange rate.
2. Growth-led models and endogenous elasticities Thirlwall (2002, p. 52) notes that "in neoclassical theory, output growth is a function of factor inputs and factor productivity with no recognition that factor inputs are endogenous, and that factor productivity growth may also be a function of the pressure of demand in an economy. In practice, labour is a derived demand, derived from the demand for output itself. Capital is a produced means of production and it is therefore as much a consequence of the growth of output as its cause. Factor productivity growth will be endogenous if there are static and dynamic returns to scale." Thirlwall»s first model shows basically that the export demand function is the most important component of autonomous demand in an open economy. The growth of exports thus governs the long-term growth of output, with the other components of demand adapting to it. It is thus assumed that where ( # is the growth rate of output over time, which is function of # # , the growth rate of exports. The export demand function is straightforward which, expressed as rates of change, becomes where . $ are domestic prices, . % are the prices of competitors measured in a common currency, / is income outside the country, 0(< 0) is the price elasticity of the demand for exports and &(> 0) is the income elasticity of the demand for exports.
The growth of the income of the rest of the world and foreign prices can be considered exogenous, but the rise of domestic prices can be considered endogenous. It is derived from a mark-up pricing equation in which prices are based on unit labour costs and a mark-up rate where 7 is the national wage rate, 8 is the average output of labour and 9 is 1 + the mark-up on unit labour costs. In rates of change, Productivity gains, in turn, partly depend on the growth of output itself. This is due to static and dynamic returns to scale, as given by Verdoorn»s Law where = (# is the autonomous growth of productivity and ? is Verdoorn»s coefficient. Verdoorn»s relation establishes the possibility of a virtuous growth cycle, led by demand. The model»s equilibrium solution is reached through the following procedure: (6) is substituted into (5), the latter»s results into (3) and, finally, this is substituted into (1). The result is The Verdoorn coefficient ( ? ) enlarges growth rate differences between economies arising from differences in other parameters and variables (which means that the higher is ?, the smaller will be the denominator, since 0 < 0). If ? = 0, the differences are not increased.
If model (7) is seen merely as an export-led model, without any Verdoorn effect feedback and with constant prices, equation (7) reduces to By imposing a balance of payments constraint, one obtains ) = 1/$, where $ is the income elasticity of the demand for imports. Thus, This result shows that the growth rate of a country relative to all the others (') is equiproportional to the ratio of the income elasticities of the demand for exports and importsas Thirlwall (1979) has shown.
Based on a similar construction, Krugman (1989) proposed inverting the causality of the models above. In his model, the growth of the labour force determines the growth of output, and the rapid increase of the latter leads to fastgrowing exports -hence the apparently higher income elasticity of the demand for exports. Causality runs, in contradistinction to Kaldorian-inspired models, from growth to the export elasticities, and not from the latter to the former. It is thus the case that the income elasticities, according to this approach, are not structural parameters, but rather variables that adjust to reach equilibrium with the ratio between the growth rates of national and world income. In other words, Krugman makes the elasticities endogenous.
As McCombie and Thirlwall (1994) also do, Krugman (1989) rejected the hypothesis that changes of the real exchange rate are an important component for keeping balance of payments equilibrium. Since prices do not adjust -for output changes in response to variations of the real exchange rate -and assuming income gaps are differences in factor endowments and productivity, the author concludes that the elasticities should adjust to income variations 1 . According to him, the explanation is that different growth rates impact trade flows in a manner that creates differences in the apparent elasticity. Elasticity is apparent, in turn, because countries do not in effect face the demand curve, but rather the demand curve supply variations bring about (Carvalho, 2007).
McCombie (2011) considers Krugman»s paper important because it discusses precisely the direction of causality. Taking into account the New Trade Theory, the monopolistic competition arising therefrom and increasing returns to scale, Krugman argued that faster growth leads to higher specialisation and the production of new goods to be sold in the world market. Therefore, high income elasticities of the demand for exports depend on supply-side dynamics and on fast growth, not the other way around.
McCombie (2011) argued that there are three problems with Krugman»s explanation. First, the degree of specialisation and the capacity of profiting from it are, at least partially, a function of the size of the economy. Second, there are many ways slow output growth can lead to a slow growth of total factor productivity. Indeed, there are a rich literature on growth models, such the export led growth using the Hicks super multiplier, the cumulative causation (Myrdal, 1957), Schumpeterian models induced investment by technical progress, learning by doing, economies of scale among others. Verdoorn»s Law supplies substantial evidence on the importance of these elements (McCombie and Thirlwall, 1994;. Finally, the third problem is that, for a developing country, it is rather unlikely that specialising in a commodity will increase its income elasticity of the demand for exports. Thirlwall (2002) considers that Krugman»s reversed causality hypothesis is a tautology. In Thirlwall»s (1979) model, wherein causality runs from the elasticities to growth, the former reflects the structure of production. This is the basic assumption of all classic centre-periphery models. Even amongst industrialised countries (Krugman»s main focus), feedback mechanisms as the ones already described (associated to Verdoorn»s Law) tend to perpetuate initial differences in income elasticities associated, on the one hand, to "inferior" industrial structures, and, on the other, to "superior" ones (Thirlwall, 2005).
In models inspired by Thirlwall (1979), the "lack of structure" of the elasticities many times involves, as in Krugman»s (1989) approach, the question of the extent to which income elasticities can be taken as exogenous (as the original models in line with Thirlwall»s suggest) or endogenous (as Krugman suggests).
Based on Thirlwall»s original approach, authors such as McCombie and Roberts (2002) and Missio and Jayme Jr. (2012) propose "solutions" for making the income elasticities endogenous without, however, inverting the direction of causality. These authors maintain the premise that a country»s sustainable growth rate is given by the product of the ratio of its income elasticities and the growth rate of world income, without the need of assuming that these elasticities are exogenous.
It should not be forgotten that, in many instances, the income elasticities of the countries are largely determined by natural resource endowments and the characteristics of the goods they produce. These are products of history, which are independent of the growth of output. An obvious example is the contrast between primary product production and industrial production: primary products tend to have an income elasticity of demand less than unity (Engel»s Law), while most industrial products have an income elasticity greater than unity (Thirlwall, 2005).
In this vein, a slightly different way of making the elasticities endogenous, which furthermore allows for a structural analysis of their changes, is expanding Thirlwall»s model to a multi-sector approach. By doing so, the sectoral composition of the country»s structure of production and its specialisation pattern makes aggregate elasticities endogenous (Silveira, 2011). This is what Araújo and Lima (2007) intend with their model. McCombie and Roberts (2002), in turn, propose a balance of payments constraint model with hysteresis in the elasticities. Specifically, the income elasticities of the demand are a non-linear function of past growth rates (a sufficient condition to break with the equilibrium conditions of the standard model).
Missio and Jayme Jr. (2012) explore the relationship between the exchange rate, structural heterogeneity and the income elasticities of the demand for exports and imports in developing economies. Their goal is to test whether a competitive real exchange rate leads to a diversification of the investment and the production of sectors that operate in the world market. The authors indicate that real exchange rate undervaluation affects an economy»s productive heterogeneity. The elasticities are endogenous in the authors» model through the level of the real exchange rate. If the latter is depreciated, this might foster research and development, given its positive impact on self-financing conditions and access to credit, thus making it possible to modernise and diversify the structure of production. In the long-term, this leads to higher export capacity and reduced dependence on imports.
Most recently Missio et al. (2017) extend the model developed by Araujo and Lima (2007) to derive a balance-of-payments equilibrium growth rate analogous to Thirlwall»s Law based on a Pasinettian multi-sector macrodynamic framework in which income elasticities are endogenous to the level of the real exchange. Furthermore, the model was built to relate growth, the real exchange rate and sectoral heterogeneity. From a cumulative causation perspective, the authors demonstrate the effect of the level of real exchange rates on the generation of technological progress, and how these rates also impact the growth of the whole economy via a balance-of-payments constrained approach. The authors show that an undervalued real exchange rate has positive effects on economic growth in developing countries.
3. Testing the hypothesis of endogenous elasticities and the role of the real exchange rate
Data sources
In order to analyse the hypothesis that the ratio of the income elasticities is endogenous, as well as the importance of the exchange rate, we must estimate income elasticities for a series of countries. We used the annual volume of exports and imports and world and domestic income. The dataset is the World Development Indicators (World Bank), and they are listed in Table 1 2 .
To find the real exchange rate we take the World Development Indicators (World Bank). We used the Consumer Price Index (CPI) for the countries listed in table 1, with the United States as the basis for all foreign price levels and 2005 as the base-year. Besides the CPI, we also collected time series of the nominal exchange rate, in local currency units per U.S. dollars. To study the endogeneity of elasticities (instrumental variables) we employ the average real exchange rate of the last 10 years for which there is data.
To look into the endogeneity of the ratio of the income elasticities we used another variable (as an instrument) to perform the Durbin-Wu-Hausman (DWH) endogeneity test. This other variable is the share of technology-intensive sectors in a country»s total exports. We estimated the "share of technology-intensive sectors in total exports" using the methodology Lall (2001) proposes. The author classifies output into primary products (PP), resource-based manufactures (RB), low technology manufactures (LT), medium technology manufactures (MT) and high technology manufactures (HT). Based on this classification, we can group MT and HT manufactures into a high technology sector (HT) and RB and LT manufactures into a low technology one (LT), leaving primary products in a category of their own.
We thus took the share of the high technology sector (MT+HT) in total exports as the instrument for the ratio of elasticities, in order to estimate the instrumental variables models. We define this variable as the average value of seven years of variations (2004 to 2010).
The share of the different sectors in a country»s total exports, according to Lall»s (2011) classification, can be found at the website of the Economic Commission for Latin America and the Caribbean, in the Interactive Graphic System of International Economics Trends section (IGSIET).
Aggregate estimation of the elasticities of foreign trade
In order to test the hypothesis that the ratio of the elasticities is endogenous, we employ the Durbin-Wu-Hausman (DWH) endogeneity test. It tests for endogeneity in a model estimated by instrumental variables. Besides, we use the Instrumental Variables to test this hypothesis, based on Cameron and Trivedi (2009). The instrumental variables estimator (IV) is consistent under the assumption of valid instruments (z). The latter are variables correlated to the regressor x, which satisfy E(F|/) = 0. The IV approach is the original and most widely used method to estimate the parameters of models with endogenous regressors.
Our research strategy is as follows. We first estimate the ratio of income elasticities of the demand for exports and imports for 38 countries. Given this, we then run a cross-country regression of the average relative growth rate of output against the previously estimated ratio.
Considering Thirlwall»s Law equilibrium equation, from the former section, we obtain the following testable model Where ! ) /' is the ratio between national income and world income and & ) /$ ) is the ratio of the income elasticities of the demand for exports and imports.
On the one hand, for the simplest version of Thilrwall»s Law in equation (10) to be valid, J must equal 1. On the other hand, for the endogeneity hypothesis to be valid, & ) /$ ) must be endogenous in this equation.
To obtain these estimates we used the error correction term (ECT), since it contributes to the parameterization of a Vector Error Correction (VEC) equation. However, the VEC parameterization was not possible for all countries, given that some residuals were auto-correlated, heteroskedastic and non-normally distributed. For these cases, we estimated the parameter using an autoregressive distributed lag model (ADL). This model, besides breaking with the hypothesis of endogenous variables, allows other parameterizations that can adjust for the problems of the residuals (in some cases). Table 1 displays the estimates of the income elasticities for several countries. The numbers in bold were estimated by the vector error correction model, while those not in bold were estimated by the ADL model. The coefficients shown in table 1 have the expected signs. They are, furthermore, significant for all countries, both for exports and imports, at 1% and 5%. Finally, they reveal that the balance of payments equilibrium growth rate (BPEGR) is close to the actual growth rate of the countries. This is the first evidence in favour of Thirlwall»s Law.
Identifying the instrumental variables
The aim of this section is to present the econometric method used to test the hypothesis that the ratio of elasticities is endogenous, as well as the instrumental variables used. When we assume that variables are endogenous, in panel or in cross-section data, we use instrumental variables. The statistical test used for this hypothesis is the Durbin-Wu-Hausman (DWH) test, as well as the traditional Hausman test, which allows for testing whether the regressor is endogenous. 3 The Durbin-Wu-Hausman 4 (DWH) test is a more robust version of the Hausman test, for it uses the device of augmented regressors (Davidson, 2000). The results show that the per capita growth rate of a country is directly related to that of its exports (or the sectoral income elasticities multiplied by the growth rate of the world economy) and inversely related to the sectoral income elasticities of the demand for imports. It should further be noted that the sectoral income elasticities of the demand for exports and imports are weighed by coefficients that measure the participation of each sector in, respectively, total exports and imports.
The major consequence of this model is that changes in the sectoral composition of the economy»s output (i.e., in the structure of production) impact its growth rate. Romero et al. (2011) estimated sectoral elasticities for Brazil, obtaining results that corroborate the multi-sector version of Thirlwall»s Law. Thus, changes in the sectoral composition of output are reflected in aggregate income elasticities. Similarly, Araújo and Lima (2007) estimated MSTL elasticities for a number of Latin American and Asian countries. They verified that more technology-intensive sectors have higher income elasticities, and that these differences are greater for exports than for imports.
These evidences show that, as industrialization deepensand, most importantly, as the share of technology-intensive sectors in the economy increases -, the elasticities of exports and imports also vary. This impacts the growth rate of output. Indeed, by making the productive structure of a country dynamic, one allows for the cumulativeness of short-term effects on the economywhich can lead to changes in the long-term patterns of the same country.
Here we therefore assume that the share of high-technology manufactures has a direct impact on the income elasticities. The reason for this is that, according to MSTL and the empirical literature (see Romero et al., 2011;Gouvea and Lima, 2010;Araújo and Lima, 2007), the greater the share of high technology manufactures in the economy, the greater will be the income elasticity of the demand for exports (which relaxes the balance of payments constraint, increasing the ratio of the elasticities). Hence, the faster will also be economic growth.
According to the same hypothesis, the impact of the share of high technology manufactures on economic growth is indirect, i.e, via the income elasticities. This can work as a powerful instrument for testing the endogeneity of the elasticities. We used the classification criterion Lall (2001) proposes for differentiating the technological content of economic sectors, grouping the medium and high technology manufactures into a single sector based on the average 2004-2010 value.
The Level of real exchange rate
The use of the level of the real exchange rate as an instrument for the ratio of the elasticities stems from the theory on the Balassa-Samuelson effect and the evidences Rodrik (2008) and Sampaio and Gala (2008) find, as well as from discussions present in Ferrari et al. (2010), Silveira (2011) and Missio and Jayme Jr. (2012). According to Rodrik (2008) and Sampaio and Gala (2008), exchange rates deviations, calculated by the Balassa-Samuelson effect, are significant in explaining economic growth.
Assuming that the Balassa-Samuelson effect is valid, and, even more, taking the level of the real exchange rate as an instrument for the ratio of the income elasticities, it can be said that the real exchange rate somehow controls for the productivity of the economy. Therefore, it has an indirect influence on economic growth through the ratio of the elasticities. In other words, the real exchange rate (and productivity) alters the ratio of the elasticities, consequently affecting economic growth 5 .
Other works also start with the same hypothesis that the real exchange rate and the ratio of elasticities are relatedas the works of Silveira (2011), Missio and Jayme Jr. (2012) and Ferrari et al. (2010) show, the exchange rate has an impact on the income elasticities. That is to say, the exchange rate might affect the income elasticities of the demand for exports and imports, either relaxing or tightening the balance of payments constraint on growth, according to Thrilwall»s equilibrium equation. Ferrari et al. (2010) investigate the basic hypothesis that managing the real exchange rate can lead to effects that transcend short-term aggregate demand adjustments. It can actually shift the elasticities, and hence alter the long-term relation between the growth rates of domestic and world output 6 .
As Silveira (2011) points out, we can conclude that McCombie and Roberts (2002) suppose that the real exchange rate has an indirect, long-term effect on the economy»s total output. The reason for this is that the exchange rate affects the short-term growth of output (even assuming PPP), thus also transforming the economy»s structure of production and, hence, the income elasticities.
Inasmuch as the exchange rate is an essential determinant of the relative prices of the economy, its variations alter the incentives for producing numerous goods. These changes foster or disarticulate various sectors and productive chains. This discussion is not exclusively related to the distribution of incentives between tradable and non-tradable sectors (Rodrik, 2008), but also, and most importantly, to incentives within tradable sectors themselves. According to the theoretical argument advanced here, the proper management of the exchange rate can redirect income to less traditional (and more transversal) sectors, thereby allowing them to develop. As a devalued exchange rate makes the prices of non-traditional sectors competitive in the international and domestic markets, these sectors get a (unique) chance of developing themselves (as an outcome of dynamic economies of scale, learning-by-doing etc.) and boosting their price -and even their non-pricecompetitiveness (considering that qualitative gains can be achieved via the same incentives) (Silveira, 2011).
Finally, the aforementioned work of Missio and Jayme Jr. (2012) explores the possibility of a relationship between the exchange rate, structural heterogeneity, and the income elasticities of the demand for exports and imports in developing economies. The authors provide evidence that an undervalued exchange rate induces a diversification of investments and products in sectors that operate in the world market.
Using instruments to test the endogeneity of elasticities
According to Cameron and Trivedi (2009), the validity of an instrument cannot be tested in a just-identified model. But it is possible to test the validity of overidentifying instruments in an overidentified model, provided the parameters of the model are estimated using optimal GMM 7 .
The starting point is the fitted value of the criterion function after optimal The fundamental hypothesis for the consistency of the OLS estimator is that the error term is not correlated to the regressor, i.e., E(F|,) = 0. If this hypothesis does not hold, then the OLS estimator cannot be interpreted as a causal effect.
For the matter at hand, the exogeneity hypothesis, a usual assumption for testing Thirlwall»s Law, is an indispensable element to find the balance of payments equilibrium growth rate. If the hypothesis is violated, as amongst others McCombie and Roberts (2002) and Missio and Jayme Jr. (2012) suggest, then Thirwall»s equilibrium might not be validor it might still be valid, but express a bi-directional relationship between the equilibrium growth rate (which reflects the ratio of the elasticities) and actual growth. In other words, if the ratio of the elasticities is exogenous (structural), Thirlwall»s (1979) simplest model holds. More sophisticated versions of the model are more appropriate, such as MSTL or a model that includes the level of the real exchange rate as a subsidiary element in balance of payments constrained growth. This is why we test whether the ratio of the elasticities is endogenous, employing the Durbin-Wu-Hausman (DWH) test, as explained above.
The DWH test initially considers that the ratio of the elasticities (&/$) "variable" is endogenous. It then tests its endogeneity by means of an instrumental variables model. It is worth mentioning that the endogeneity hypothesis imposed on the ratio of the elasticities will "only" be made to perform the DWH test.
The first stage presents the test of the instruments against the "potentially" endogenous variable. It is thus defined where & $ ⁄ is the ratio of the income elasticities, W is the share of hightechnology manufactured goods in total exports and X8 is the level of the real exchange rate.
Equation (11) defines an overidentified model with two instruments, namely, the share of high technology manufactured goods and the level of the real exchange rate. The advantage of an overidentified model is that it allows for testing overidentifying restrictions, whereby one can test the validity of overidentified instruments via a GMM estimation of the parameters.
The endogeneity test for the ratio of the elasticities is made on the structural equation (10), which has already been defined We opted for a model without the constant term, for if we include the latter it would no longer be a test for Thirlwall»s Law. According to Cameron and Trivedi (2009), when there are more instruments than regressors (an overidentified scenario) the most efficient estimators are the 2SLS and GMM.
However, in an overidentified model, 2SLS and GMM estimators can lead to different results. The 2SLS estimator is more efficient if the errors F ) are independent and homoskedastic. We first present, nevertheless, the results for the first stage of the estimation, which only make sense for a model estimated by 2SLS. The GMM estimator has the same result for the first stage.
The first stage indicates that the instruments are significant in determining the endogeneity of the tested variable. Specifically, both the "share of hightechnology manufactured goods" and the "real exchange rate" are significant at 1% in determining the ratio of the income elasticities of the demand for exports and imports. The validity of the instruments always demands a more careful analysis. Due to this, in addition to the theoretical discussion presented in the former sections, we chose to perform an overidentifying restrictions test. Besides the latter test, table 3 also presents the results of the structural equation for an overidentified model with two instruments, as indicated in equation (10). The results indicate that the ratio of the elasticities is significant in determining the growth rate. Besides, the Wald test on restrictions rejects the null hypothesis that the estimator of the ratio of the elasticities is equal to unity. It does so at a 10% significance level for the models with adjusted residuals and at 5% for the model with unadjusted residuals, against Thirlwall»s (1979) simplest model. This result reveals that there might be other determinants in the canonical modelsuch as the possibility of capital flows, as Thirlwall and Hussein (1982) have already suggested -or the other factors herein discussed.
The results of the overidentifying restrictions test do not reject the null hypothesis that all instruments are valid, given that : > 0,10 for all identified models: 2SLS (with and without adjusted residuals) and GMM.
In all effect, it was not possible to reject the hypothesis that the share of high technology manufactured goods and the real exchange rate are valid instruments for the ratio of the income elasticities. Nevertheless, and once more, it is important to test for endogeneity, so that the results above (Wald test on restrictions and overidentification) can be more judiciously analysed. The test indicates that, for an overidentified model with two instruments (the share of high-technology manufactured goods and the real exchange rate), the hypothesis that the ratio of the elasticities is an exogenous determinant of relative growth cannot be rejected. This is another empirical evidence in favour of Thirlwall»s Law. Likewise, the level of the real exchange rate is a subsidiary element of growth, particularly as it affects non-price competitiveness.
Conclusions
This paper finds evidence that the level of the real exchange rate is a significant and important determinant of the trade income elasticities. This holds in spite of the fact that the endogeneity tests indicated that the hypothesis of endogenous elasticities is not valid.
Indeed, the level of the real exchange rate affects the ratio of the elasticities by increasing the gains from the sale of tradable goods, profit margins and investment, thus leading to the diversification of the investments and products of sectors that operate in the world market 9 . A managed exchange rate relaxes the constraint and maintains the balance of payments in equilibrium, as it increases the economy»s competitiveness (assuming that the income elasticity of the demand for exports of primary goods is low and that the income elasticity of the demand for imports of manufactures is high). Not only so, it also spurs technological development, in light of its benefits to funding and credit, thus stimulating research and innovation. Consequently, the level of exchange rate can affect the supply-side of the economy in the long-term. This theoretical framework leads to the understanding that the income elasticities of the demand for exports and imports are influenced by the real exchange rate, inasmuch as they depend on technological development and the diversification of production.
Indeed, technological progress in developing countries depends on companies having available funds. In this regard, exchange rate devaluations, as they redistribute income from wages to profits, provide companies with access to larger sums of resources to engage in innovative activities. Therefore, the empirical evidence and the theoretical discussion we present endorse the fact that the level of the real exchange rate plays a subsidiary role in the long-term growth of the economies, particularly developing ones. It should be noted, however, that this is a result of higher ratio of income elasticities, which in turn relax the balance of payments constraint and spur economic growth, and not of a price competitiveness-induced improvement in trade.
Regarding the endogeneity of the elasticities, the results do not reject the hypothesis that the ratio of the elasticities is exogenous. They therefore support Thirwall»s model, and not Krugman»s 45-degree rule (Krugman, 1989). The restrictions test for the validity of Thirwall»s Law implies, however, that the author»s canonical model is not sufficient to explain the growth of the analysed countries. As previously mentioned, this suggests that other variables, such as capital flows and foreign debt, should also be taken into consideration when studying growth. The literature has already identified and tested models for these variables: namely, Thirwall and Hussain (1982) proposed the theoretical model that acknowledges the importance of capital flows in balance of payments constrained growth, and Moreno-Brid (1999) proposed the model that includes a sustainable debt constraint. Future research can include capital flows in the equation, so it can be possible to analyse the relationship among this variable, trade income elasticities, the level of exchange rate, and multi-sectoral Thirlwall»s law.
Finally, the role of the level of the real exchange rate cannot be neglected when analysing demand-led growth in balance of payments constrained growth models, particularly in light of its stimulus to more productive and technologyintensive sectors. In other words, the real exchange rate is one of the determinants of income elasticity.
The first group of countries to be considered are the largest developing countries whose exports consists of at least 70% of manufactured products. This is because the manufactured goods, by hypothesis, have a higher possibility of differentiation that commodities and primary products. This share corresponds to the average of 68% of manufactured exports during the period 1999-2003 reported by UNCTAD (2005). Using this methodology as a reference (share of 70%), countries that meet this criterion were updated considering its export in the last year available. The 18 developing countries that meet the Blecker and Razmi (2007)'s paper were: Bangladesh, China, South Korea, Philippines, Hong Kong, India, Jamaica, Malaysia, Mauritius, Mexico, Pakistan, Dominican Republic, Singapore, Sri Lanka, Taiwan, Thailand, Tunisia and Turkey.
These countries were submitted to the discretion of technological classification of Lall (2001). For the purposes of this work, it is expected that if the sum of the ratings RN, LV, MV and HV is above 70 percent, they are manufactured. Only Jamaica does not meet the criteria and is therefore excluded from the test. Besides Jamaica, Taiwan and Bangladesh do not have enough data, which also justified the exclusion of these countries.
Besides, according to Blecker and Razmi (2007), we considered in this study a sample of industrialized countries consisting of nine of the ten largest importers of manufactured goods from developing nations. United States are excluded from the sample because they are used as a reference for all other countries. Furthermore, Australia is included in this sample. Thus, the countries are Australia, Belgium, Canada, France, Holland, England, Italy, Japan and Switzerland.
Finally, although a sample of countries whose exports manufactured goods is less than 70% are included. These countries were selected according to data availability (for Russia, for example, there is not sufficient data) and because of the importance of the country in international trade The group of developing countries with export basket is less than 70 percent manufactured chosen for the study are: South Africa, Argentina, Brazil, Cameroon, Chile, Colombia, Côte d'Ivoire, Ecuador, Indonesia, Paraguay, Peru Syria and Uruguay. | 8,250.2 | 2019-09-04T00:00:00.000 | [
"Economics"
] |
Origin of Probability in Quantum Mechanics and the Physical Interpretation of the Wave Function
The theoretical calculation of quantum mechanics has been accurately verified by experiments, but Copenhagen interpretation with probability is still controversial. To find the source of the probability, we revised the definition of the energy quantum and reconstructed the wave function of the physical particle. Here, we found that the energy quantum ê is 6.62606896 ×10 -34 J instead of hν as proposed by Planck. Additionally, the value of the quality quantum ô is 7.372496 × 10 -51 kg. This discontinuity of energy leads to a periodic non-uniform spatial distribution of the particles that transmit energy. A quantum objective system (QOS) consists of many physical particles whose wave function is the superposition of the wave functions of all physical particles. The probability of quantum mechanics originates from the distribution rate of the particles in the QOS per unit volume at time t and near position r. Based on the revision of the energy quantum assumption and the origin of the probability, we proposed new certainty and uncertainty relationships, explained the physical mechanism of wave-function collapse and the quantum tunnelling effect, derived the quantum theoretical expression of double-slit and single-slit experiments.
Introduction
As one of the pillars of modern physics, quantum mechanics has many notable achievements, with its theoretical calculation verified by experiments.However, its physical nature is still controversial.Studies on the counterintuitive phenomena of quantum mechanics, such as the quantum superposition state, wave-function collapse, quantum tunnelling effect, uncertainty principle, double-slit interference, and single-slit diffraction, have not reached a consensus.From a practical point of view, if the calculation results are correct, then the physical mechanism does not need to be debated.However, to understand nature, revealing the essence of matter and its interactions is the fundamental task of physics.Therefore, quantum mechanics must be studied to explore the essence of the material world.
The quantum superposition state, wave-function collapse, the quantum tunnelling effect, the uncertainty principle, double-slit interference, single-slit diffraction, and other counterintuitive phenomena are seemingly related to the probability of quantum mechanics, which has been physically interpreted by the Copenhagen school.
Although Planck, Einstein, Schrödinger, and other physicists strongly opposed it, they did not propose a better interpretation.In quantum mechanics, the probability is based on Born's hypothesis, which lacks physical sources; thus, Steven Weinberg has repeatedly questioned that the integration of probability into physics has confused physicists, but the difficulty of quantum mechanics related to its source rather than probability since 2016.Therefore, to understand the counterintuitive phenomenon of quantum mechanics, one must trace the origin of probability in quantum mechanics.
Smallest unit of energy -the energy quantum
Planck 1 proposed an energy density Formula (1) for black-body radiation in 1900, consistent with experimental data.Based on an assumption that the resonator of the black-body radiation source discontinuously radiates energy according to the smallest unit of , he theoretically derived the following formula. ν where is the energy density of black-body radiation; is the frequency of the resonator; c is the speed of light; k is the Boltzmann constant; T is the thermodynamic temperature; and h is the Planck constant.
In the process of deriving Formula (1), Planck 2,3 obtained Formula (2), which shows the energy radiated by the resonator of the black-body radiation source.
, n=1, 2, 3… ( Based on Formula (2), Planck believed that the energy radiated by a black body is discontinuous and thus can be taken only as an integral multiple of .If n=1, the resulting energy is the smallest energy unit possible (i.e., it cannot be subdivided), defined as the energy quantum.According to the definition of this energy quantum, de Broglie assumption that the energy E of a physical particle is equal to the energy of an energy quantum hν is clearly illogical, and it is also difficult to logically establish the wave function of a physical particle.
In the assumption of , Planck sets as the minimum energy released by the resonator of the black-body radiation source.Moreover, the dimension of the coefficient h is Joule•sec (J.s) to maintain the dimension of energy of when the dimension of the vibration frequency of the resonator is s -1 .However, the assumption of shows that the resonator, which radiates energy externally, is actually doing work outside.The minimum amount of power of a single resonator of a black-body radiation source is , where the dimension of is J.s -1 and the dimension of the coefficient h is Joule (J).Based on the above analysis, if both sides of Formula (2) are divided by unit time T 0 (1 s), then Formula (2) becomes Formula (3). ( Let and ; then where E is the energy radiated externally by a single resonator of the black-body radiation source per unit time, which is the power with dimension J.s -1 .If n = 1 in Formula (4), then Formula (5) holds.
J (5) Formula (5) shows that ê is the smallest unit of energy radiated by a single resonator of a black-body radiation source.We define the minimum unit of energy ê as an energy quantum with the unit of Joule.Clearly, the energy of the material world is discontinuous, and the energy quantum is ê instead of hv, with a value of 6.62606896 ×10 -34 J.The energy of the energy quantum is a physical constant that cannot be subdivided and changed and is not related to time or any other factor.However, the period of the corresponding vibration can vary.
Based on Planck's hypothesis, A. Einstein 4,5 believed that light wave energy is discontinuous and that the smallest unit of each discontinuous and indivisible packet of energy is hν, named the photon.
Everyone knows that the radiation of a black-body radiation source takes the form of electromagnetic waves, namely, light waves.According to the definition of the energy quantum, the minimum energy of a period of an electromagnetic wave radiated by a vibration of a resonator in a black-body radiation source is equivalent to the energy of an energy quantum, with a value of 6.62606896×10 -34 J, which is transmitted outward at the speed of light.Therefore, we defined the minimum energy transmitted by a periodic electromagnetic wave as a photon, then, the energy of a photon is instead of of Einstein's hypothesis.As a physical constant that cannot be divided and changed, it is not related to time or any other factor.However, the period and wavelength of the corresponding electromagnetic wave can vary.
Smallest unit of quality -the quality quantum
Einstein proposed that the energy and quality of matter are equivalent; Formula (6) shows their relationship. ( where E is the energy; m is the quality; and c is the speed of light in a vacuum. When the energy takes the energy quantum ê, an indivisible minimum unit of quality exists, namely, the quality quantum ô.Formula (7) shows the relationship between ê and ô. (7) Substituting the values of the energy quantum ê and the speed of light c into Formula (7), the quality quantum ô is 7.372496×10 -51 kg.In addition, the photon rest mass is zero, with a total mass of 7.372496×10 -51 kg.In the material world, the minimum value of the rest mass of physical particles other than photons is ô.Therefore, the mass of photons and physical particles can increase or decrease only following the integral multiples of the quality quantum.Mass and energy discontinuities are the essences of the material world.If the energy quantum, quality quantum, and light speed are regarded as natural constants of the material world, then Formula (8) shows their relationship. (8)
Wave function of photons
According to electromagnetic theory, the electromagnetic wave function is as shown in Formula (9). ( where ψ and ψ 0 are the amplitude and maximum amplitude of the electromagnetic wave, respectively; ν and are the frequency and wavelength of the wave, respectively; and and t are the spatial position and time, respectively.
According to its definition, a photon's energy is ê, which corresponds to one period and one wavelength of electromagnetic wave energy in a single electromagnetic wave.
n( ) photons with a frequency ν are superimposed to form a superimposed photon, the energy E and momentum p of which are defined in Formulas (10) and Formula (10) shows that the superimposed photon is a photon as defined by Einstein.
ℏ (14) Formula ( 14) is a function relation that characterizes the spatial propagation of superimposed photon streams after the electromagnetic wave function is quantized.In the wave function (14) of photon streams, ψ and ψ 0 represent the intensity amplitude and the maximum intensity amplitude of energy transmitted by the photon streams, respectively.The modular square of ψ is the energy density of the photon streams.
Since Formula (9) is the wave function of a monochromatic electromagnetic wave with frequency ν, Formula ( 14) is the wave function of a single energy superimposed photon stream with frequency ν.More complex electromagnetic waves are mixed with multiple frequencies in the objective world, the wave function of which is formed by the linear superposition of the monochromatic electromagnetic wave functions, expressed as where ψ 0νj represents the maximum intensity amplitude of electromagnetic waves with frequencies ν j in the composite electromagnetic waves and j is a positive integer from 1 to ∞.
Formula ( 16) is the photon stream wave function at the composite frequency quantized by the composite electromagnetic wave function (15).
where ψ 0pj represents the maximum intensity amplitude of energy transmitted by the photon stream with momentum p j in the composite photon stream and ψ represents the energy amplitude delivered by the composite photon stream.In addition, the modular square of the wave function ψ of the composite photon stream is the energy density transmitted by the composite photon stream.
The quantization of the electromagnetic wave function does not change the nature of the energy transmitted by photons and electromagnetic waves, while the energy density corresponds to the number of photons in space and time.In the whole space and time range of electromagnetic waves and photon motion, N is the total number of photons, and N j is the number of photons with momentum p j .Let |C Ppj | 2 =N j /N, which represents the photon distribution rate with momentum p j .Since the photons move throughout the entire space-time range with a constant total number, the sum of the distribution rates equals 1.Then, Formula (16) can be transformed into Formula (17).
ℏ ,j=1, 2, 3… (17 where ψ represents the total momentum photon distribution rate amplitude at time t in space r.Formula (17) expresses the function relation of the distribution rate amplitude, which periodically changes with time and the spatial position when photons move in space, i.e., the wave function of photon motion.Moreover, the modular square of the wave function equals the photon distribution density at time t in position r.
Wave function of physical particles
As a particle, photons have zero rest mass, while the photon velocity at all frequencies in a vacuum equals the speed of light c.Energy transmitted by photons has periodic changes and fluctuations in space and time directly related to the number of photons.
Additionally, the energy fluctuation causes the photon distribution rate amplitude to change periodically with time and space.The photon distribution density also fluctuates, as represented by the wave function (17).
A moving physical particle has energy; thus, it is a particle that transmits energy.
Regarding energy transfer, physical particles are similar to photons, and their rest mass is not zero, with a moving speed less than that of photons.According to Formula (8), the energy quanta transmitted by the physical particle are proportional to the number of the quality quanta contained Therefore, the distribution densities of the energy and quality quanta at time t and position r are equal.
When the energy transmitted by a particle equals that of a superimposed photon of the same frequency, the energy and momentum of the latter in Formula (17) can be transformed into those of the former.In addition, |C pj o | 2 represents the distribution rate of the particles' quality quanta with the momentum p j in a multi-particle system, where C pj o represents the distribution rate amplitudes of the quality quanta.
Regardless of the interaction between particles, the wave function of the superimposed photons becomes that of the multi-particle system.Therefore, the multi-particle system is a quantum objective system (QOS), and Formula (18) is its wave function.
where ψ represents the distribution amplitude of the quality quanta of the particles at time t in space r.Meanwhile, the modular square of the wave function ψ equals the distribution density of the particles' quality quanta at time t and position r.
In addition, if all particles have the same quality, ψ represents the amplitude of the distribution rate of the particles at time t in space r, which changes periodically with time and space.Moreover, the modular square of the wave function ψ equals the distribution density of the particles at time t and position r.Formula (18) is the QOS wave function.
Since the total number and energy of particles in space are constant, the particle
Origin of probability in quantum mechanics
Formula (18) is the wave function of a QOS composed of all state particles with the same quality.Regardless of the particle interaction, each particle has its energy, momentum, and state of motion.Considering a particle in the QOS, we do not know which one of the many in the system it is before measurement.Thus, it may be any one of the QOS.To facilitate a comprehensive study of this particle, we built a mathematical system artificially, called a quantum mechanics system (QMS).In the QMS, we study only one particle, which may be any particle in the QOS before measurement.The particle may correspond to all state particles in the QOS, meaning that the particle in the QMS has the possibility of all state particle distribution density at time t in space r in the QOS.We refer to the distribution rate of the particles in the QOS as the probability of the particle occurrence at time t in space r in the QMS, which is the origin of the Born probability hypothesis 6 in quantum mechanics.
Therefore, for the particle in the QMS, Formula (18) is the wave function.The particle wave function in the QMS is represented as Formula (20) to distinguish the QOS wave function.
If the wave function of the QMS is continuous, Formula (20) can be expressed in integral form (21). Since the particle always exists in the whole space, the particle probability in that space equals 1.
Since the QOS has no concept of probability, no randomness exists.However, the QMS introduces probability through artificial mathematical operations; thus, randomness is introduced.
Revision of the uncertainty principle and physical nature
In a QMS, Heisenberg's uncertainty relation, shown in Formula (25), is deduced based on the state superposition principle of the wave function, Born's probability hypothesis, and the non-commutation relation of operators.
p ℏ 2 (25) where and p are the uncertainties of the position and momentum of a particle, respectively.The physical meaning of the uncertainty relation is that a particle's position and momentum cannot be determined simultaneously.It indicates that the particle does not have a defined trajectory of motion.
Based on the essence of the state superposition principle in a QMS and the origin of the Born probability, and in Formula (25), where x and p are the measured values of the determined particle position and momentum, respectively; and are the average values of all possible positions and momenta in the QMS, which can be calculated by Formulas ( 26) and ( 27), respectively, and correspond to the average value of all particle positions and momenta in the QOS, rather than that of the measured particle.Therefore, and p indicate the deviation degree of the particle from the system (namely, the standard deviation) instead of being the measurement errors of the particle position and momentum.Use of the system's standard deviation to indicate the measurement error of the position and momentum of a particle is not appropriate. (26) where and are the position and momentum operators, respectively.
In a QMS, a particle can be measured by calculating its wave function with the operator.However, the wave function is expressed by the superposition state of particles with various states in the QOS; the position and momentum obtained by each measurement may be different; and the position and momentum of another particle are used to represent those of the particle to be measured.Therefore, in Formula (25), the position and momentum of a specified particle should not be expressed with those of another particle.
Based on the above analysis, the mathematical treatment of the state superposition the probability amplitude of a particle at time t in space r, which changes periodically with time and space.Meanwhile, the modular square of the wave functions represents the probability density of the particle appearing at position r at time t.The wave function is exactly the same as de Broglie7 wave function.From a mathematical point of view, Formula (20) and (21) indicate that the wave function of the particle in the QMS is always on a linear superposition state in all states of the particles in the QOS, and this is the physical nature of the state superposition principle of the wave function.Particles in different states in the QOS have different momenta and energy, indicating different moving velocities.A particle swarm composed of multiple particles expands in space over time.Therefore, the continuous expansion of the space volume occupied by the QOS is the physical nature of the continuous expansion of a particle's wave function in the QMS. | 4,093.6 | 2020-10-23T00:00:00.000 | [
"Physics"
] |
Intelligent reliability management in hyper-convergence cloud infrastructure using fuzzy inference system
Hyper-convergence is a new innovation in data center technology, it changes the way clouds manage and maintain enterprise IT infrastructure. Hyper-convergence is more efficient and basically agile technology environment. Cloud computing is a latest technology due to provision of latest cloud services over the internet. The cloud service providers cannot promise accurate reliability of their services i.e. problem in provisioning of software or hardware failure etc. Reliability of cloud computing services depends on the ability of fault tolerance during the execution of services. There are so many factors can cause faults, such as network failure, browser crash, request time out or hacker attacks. When users are facing these types of faults, they usually resubmit their requests. However, if there is any key element involved in faults or errors, additional action may be needed to deal with system logs. If there is anomaly behavior occurred in faulted virtual machine, these VMs may need extra attention from cloud system protection and security point of view. In this paper, provision of reliability management in hyper-convergence cloud infrastructure is proposed and self-healing techniques in software as a service on the basis of failure in cloud services. Intelligent cloud service reliability framework will increase the reliability during execution of cloud service.
Introduction
In current era, we are living in the universe where data is being produced day by day on demand, on command by the organization, institutions and many other firms.The amount of huge data that we consumed and produced by different smart devices.Devices like, smart phone, computers and sensors.Most of the tools and individuals are at high rate and produce huge amount of data.It is difficult to manage huge amount of data [1].New technologies introduced to overcome these issues.To overcome these issues and making our work cost effective, we use cloud computing services.Cloud computing is cheap and pay-per-use, this attitude of cloud computing putting resources over cloud infrastructure.Cloud computing service uses internet to provide service to consumer and use data center to host applications.Cloud services are available for consumer as pay-per-use, quality and services over the internet [2].
Essential characteristics of Cloud Computing
Some of the most important characteristics of cloud computing is following: that the concept of a ranking system came into being.This ranking system receives the requests from different users, which may differ w.r.t their requirements.Then, this system will look for some services Nadia Tabassum 1,* , Muhammad Saleem Khan 2 , Sagheer Abbas 2 , Tahir Alyas 3 , Atifa Athar 4 and Muhammad Adnan Khan 2 for users and assign a possible rank according to the Quality of Service (QoS) [3].
1.1.1.On-Demand Service: On-demand service is a model or technique in which we just provide the cloud users the facility that they can get their services from service providers anytime at any place.The users can get this facility to fulfil its work or any application so the users can have this facility to avail the services on demand.The service providers provide these resources to their users at any time [4].
1.1.2.Broad Network Access: By its name we can get the idea of it the cloud system is just used in broad network area so that everyone can get access their services.Most of the companies can use this facility to remain updated to their clients or other organizations by using the cloud services and anyone can use these services but it is depend on the network used like whether it is private or public it means if you use the private cloud than all the information will be under the members of the private members those are sign in to the cloud services but if it is public network than anyone can get the access of about any information and anyone can get the advantage of the services provided to the cloud users [5].
Resource Pooling:
Resource pooling is a technique in which the consumer can acquire and release the resource when it will be required on demand.The PaaS users can get the resource from the resource pool on demand so that the user can make use of this resource and then give it back to the resource pool.It also reduces all the complexities that cloud has to face by using resource pooling techniques [6].
Rapid Elasticity:
Rapid Elasticity refers to enhancement of cloud services without effecting the cloud users and buyers all the services in a very reliable and flexible manner.Cloud just provides their services to their users very easily and their users can easily get the services and get advantage of these services on demand.The cloud users can get extra storage space to have more resources from cloud provider in this way they can use more services [7].
Measured Service:
Measured service means paying cloud service cost as per their usage.It also known as metered service.In measured services all the problems and faults are being controlled and monitored.Measured service is a term that IT experts apply to distribute computing
Cloud Service Models
The most commonly used service models through which the cloud system provides different services to the users consumers are following: 1.2.1.Infrastructure-as-a-Service: Infrastructure is just like a support or foundation which is used to provide these types of services to your customers and cloud users like by using infrastructure u can give resources to the equipment's that are used in different works like virtualization, storage area, networking etc.You can easily give these resources or facilities to your customers and cloud buyers and users to enhance your services provided to the customers.You can increase your storage area and the network speed provided to the customers so that they can easily get use the benefits of the best networking speed available [8].
Platform-as-a-Service:
Platform as a service is a worldview for conveying working frameworks and it is used to provide the cloud users all the facilities through internet which is a worldwide network where we can access and download anything but the difference is quite simple that these facilities are provided by the cloud service providers and you just need to pay for what you used on cloud.PaaS gives a stage apparatuses to test, create, and have applications in a similar domain [9].
Software as-a-Service:
Software as a service is used in which the user does not need to download any application on its own computer because now the SaaS is available through which we can access any application not just only on our own computer but at any place.It is a model in which the other person through internet gets access the other persons and they can access these all application on internet.It also decreases the expenses of installation, provisioning cost etc [10].
Data reliability in cloud computing
Reliability means working the virtual machines, if the exceptions and malfunctioning occur.The system functioning is error free and in good conditions for service delivery.Service consumers consider reliability as proper functioning, security, and ease of use.Service providers also consider reliability in service creation, deployment, integration and separation.[11] The reliability in cloud computing environment also includes providing proper functioning in different stages in service lifecycle.Service integration and separation allow service providers to offer both full set of functionality and part of functionality to service consumers according to service level agreements.The reliability covers various aspects of cloud computing.The base line of the reliability is to provide functioning services.[12] 1.
Types of Failure in cloud computing
In conventional software engineering reliability, four main approaches to design a reliable software system.These four approaches are fault prevention, fault removal, fault tolerant, and fault forecasting.However, in cloud computing environment, cloud applications only accept faultprevention techniques and fault removal techniques to develop fault-free software as service [13].
The large-scale cloud services involve large number of virtual machines and middle ware layers.Failures of these components affect reliability of cloud applications directly.
The idea of most cloud service providers deploys their services in large data centers.All of services are running in virtual machines that reside in physical machines.There are usually multiple virtual machines running in one physical machine.When a virtual machine is initialized, the administrator or virtual machine monitoring system gets resources from a resource pool to build requested virtual machine.Reliability is cloud computing under the different conditions like network resources, latency and cloud monitoring can result in low performance.Such conditions must be observing by autonomous system to avoid the delivery of cloud services failures [14].
Figure 1. VM Checker
In order to enhance reliability, we need to identify faults.system event logs record most of system events that include system fault related events.We can trace system faults through system event logs.There are always system critical events happened before system enter fault states as shown in figure 1.Therefore, if a system could predict system critical events, it can predict system faults before they really happen.Researchers dig into this problem from different aspects.Following study presents several techniques for system fault monitoring.Through machine learning techniques, we can find some patterns that always appear when system faults occur.Statistical data is used in mining and detecting fault patterns in service event logs are normalized according to domain information in Memory Module.
Related Work
Cloud computing has allowed the efficient management, deployment and configuration of clusters where the aforementioned frameworks can be deployed by taking advantage of both the elastic nature of the cloud, where reserved resources are used for as long as they are needed (i.e., pay-as-you-go), and the deployment/management ease of use.To exploit these two properties and fulfil enterprise needs for minimizing infrastructure maintenance and operation costs, companies follow a similar approach for either public or on premise cloud offerings: they utilize a service that launches and manages big data clusters in order to execute the requested workloads whereas a single storage back-end is used to host data [15].
Ian andrusaik proposed framework in his paper titled -A Reliability-Aware Framework for Service-Based Software Development‖ allows the idea of hyper-convergence is to simplify operation and management of data centers by converging the computing, storage and networking components into a single, software-driven appliance.It's defined as an IT infrastructure framework in which storage, virtualized computing and networking are tightly integrated within a data center.A prototype implementation has been developed as a proof of concept of the design which has been evaluated and showed that the system is successful at providing availability when failure occurs at a cost to overall performance [16].Different autonomic monitoring system is proposed by the researchers.Monitoring framework is proposed by [17] for the web service-enabled applications.In paper he discussed the hardware and software resources in the form of web service-enabled.The elasticity of both hardware and software resources are monitored.In autonomous system, a self-optimized monitoring algorithm is proposed which updating dynamic information and self-adaptive events.
Fault prevention and fault tolerance intend to give the capacity to convey a service that can be trusted, while fault expulsion and fault forecasting mean to achieve trust in that capacity by defending that the utilitarian and the steadfastness and security details are sufficient and that the framework is probably going to meet them.It is significant that repair and fault tolerance are connected ideas; the Intelligent reliability management in hyper-convergence cloud infrastructure using fuzzy inference system refinement between fault tolerance and support in this paper is that upkeep includes the cooperation of an external agent [18].
In a cloud storage framework, many components like storage, services, hardware can result in data failure.Data failures additionally prompt cloud benefit failures.The fundamental causes of cloud data failures hardware, system, software and power failures.Data reliability incorporates augmenting solidness and accessibility of data.Durability mitigates perpetual failures and accessibility mitigates transient failures.
For automation in reliability monitoring, an agent-based approach is helpful where diver diverse provision of software services is required.This approach will support in automated system in every unconditional situation where software behaviour possible to specify.In autonomous situation, agent can evolve, learn, cooperation with entities and negotiation can perform.Expanding system required agent behaviour role while rapid change occur [19].
When the indexing procedure is going on in cloud services, the key factor is that the requirement of the user should be satisfied.Such kind of framework is desired that will fulfil these requirements.By looking on to the above figure, it is known that indexing manager will receive the information and after that, process it according to the ranking parameters like performance, usability, and cost.Indexing Manager will consider it for the best service as desired by user necessities.Indexing Administrator will also be answerable for other activities as well, i.e. taking characteristics for ranking, the track record of characteristic value, and ranking result [20].
Transformation of hyper-converged
The journey of transformation is started from the conventional system.In traditional system, all modules need different skill to manage.All entities configured and tested separately as shown in figure 2. In converged era, the hardware defined infrastructure was introduced along with monitoring software and backup.In hyper-converged, all server components on single unit and integrated through software defined environment.All the components are readily available and ready to use [21].
Data centre is a facility that contains several computers that are connected together for the purpose of storing and transmitting data.The facility is designed to be used by several peoples and is equipped with hardware, software, peripherals, power conditioning, backup, and communication and security systems.Different architectures of data centres involves Traditional infrastructure (TI), Converged infrastructure (CI) and Hyper Converged infrastructure (HCI) [22].
Proposed Methodology
In this proposed model, the first it will collect the service history, service weight, QoS parameters, execution time.After collecting the service detail, it will analyzed the virtual machines status and calculate the utility, if the existing pattern are available in Memory Module then no need for new plan.In provision of reliable services in cloud computing, system will determine the planning strategy in decide phase.At the plan phase, the system will generate the plan according to the situation and finally execute the plan.
The proposed framework of Intelligent Cloud Service Reliability management as shown in figure 3 Intelligent reliability management in hyper-convergence cloud infrastructure using fuzzy inference system 1.
Service Event Log Check 3.
Network Resource Check
The Service Monitoring Agent is responsible for monitoring provision of cloud services over the network.
When cloud services are provisioning over the network through user interface, the service are being execute will monitor through our proposed intelligent cloud service reliability framework.In Service Monitoring Agent, service monitoring as well as service analysis will be monitored.In service examination layer, there are three sub modules to monitor the virtualized cloud service through different check points.
In service examination layer, the cloud service will be examined through Network resource check.
Figure 3. Proposed Intelligent reliability management Framework
In Virtual machine checking, the health status of all virtual machine will be monitored through VM checker as shown in figure .3. Fabric controller monitor all the virtual machine through fabric agent.If any virtual machine is not working then the fabric agent will report to fabric controller for alternate virtual machine to provide error free service delivery over the network.Similarly, the fabric controller will check the status of host machine.Service Event Log checking will responsible for the auditing of cloud service in the form of log checking and log will be analyzed with the history checking in service usability layer.Network resource check include the physically resources during provision of cloud services.
Service usability Layer
This layer is responsible for the cloud service history analysis.This layer is divided into two sub layers.
History analysis 2. Event analysis
History analysis layer will keep record the failure types and stored in Memory Module for future failure predictor.
Event analysis will be responsible for the event failure status and send status for event prediction to recover such type of failures.
Self-Healing Layer
This layer will responsible for recovery of failure components in cloud service provision.This layer will keep track of all the available virtual machines and recovery of failure by keeping the track of such failures and learn the previous event failures.Self-healing layer is divided into two sub modules 1. Switching 2.
Re-composition
In provision of reliable cloud service provision, the switching will be performed if, the recovery of virtual machine is not working after re-composition.The fabric controller will activate the new cloud service after checking the service examination layer parameters.
While service healing process the recovery of service is subject to system security analysis.The required service is for the same host or system.During healing process, the system will check the service examination in term of virtual machine, Host machine, designation machine, current service status, network resource.
Implementation and Evaluation
In this section Mamdani Fuzzy Inference System is used to simulate the proposed reliability management in cloud computing.The description of fuzzy inference system (FIS) is explained in Figure 4.
Inference Engine
Characterizes administrators and defuzzifier utilized as a part of the surmising procedure..Eq-1 the rule viewer shows that the reliability is at re-composition state, where virtual machines need to re-compile again.
Simulation and Results
MATLAB 2017b is used for simulation purpose.Intelligent reliability management in hyper-convergence cloud infrastructure using fuzzy inference system EAI Endorsed Transactions on Scalable Information Systems Online First If SRT value is 5, VMU value is near about 5, event log value is 5, NRU value is 5.5 then the reliability will be in Low as shown in figure 8.
If SRT value is 1.22, VMU value is near about 10, event log value is 3.7, NRU value is 10 then the reliability will be medium as shown in figure 9.If SRT value is 10, VMU value is near about 9, event log value is 9, NRU value is 8 then the reliability will be high as shown in figure 10.
Intelligent reliability management in hyper-convergence cloud infrastructure using fuzzy inference system EAI Endorsed Transactions on Scalable Information Systems Online First ] }
Conclusion
In cloud computing, service providers always want to provide reliable services to customers or service consumers.However, there are obstacles between service providers and consumers.Customers need customized services with various configurations, these customizations and configurations are error free and seamless functionality during service execution.In software as a service, monitoring of virtual resources are hard to control the stability of their service, especially from end to end service provision.
Figure 2 .
Figure 2. Evolution of hyper-convergence consists of five major modules. Service Monitoring Agent Self-Healing Layer Service usability Layer Learning Module Memory Module 3.1 Service Monitoring Agent Service Monitoring Agent is divided into three type of monitoring checks Nadia Tabassum et al.
Figure 4 .
Figure 4. Proposed Fuzzy Inference System for IRM For the reliability management in cloud computing, Service response time (SRT), Virtual machine utilization (VMU), Event Log and network resource check will perform
Figure 6 .
Figure 6.Rule base for Proposed IRM
Figure 7 .
Figure 7. Rule surface for Reliability Management
Figure 8 -
10 shown the performance of the proposed Intelligent Reliability Management system in terms of low, medium & high reliability.
Figure 10 .
Figure 10.Lookup diagram for High Reliability EventLog and NRU.The parameter of SRT explains if the response time of cloud execution is low then the reliability in cloud is high.Three membership variables for SRT is used low, medium and high which explain the degree of reliability.We can say the reliability is high if response time of SRT is low, medium and high.The second input variable is Virtual machine utilization (VMU) having three membership functions low, medium and high.The third input variable is EventLog having three membership functions No, Yes and Critical.If the value of EventLog is no then no uncertainty is seen during cloud service execution.If the value of EventLog is Yes then cloud service is not functioning during cloud service execution.
Table 1 .
4.2.Membership FunctionsGraphical & mathematically representation of the above mentioned I/O MF of AFIS Input variables are shown in table 7. Detail of each input variable is explained in Table.2-6.In Table 1 showed the possible outcome of the four inputs.The detail are explained in table 7. Proposed Lookup table for IRM
Table 7 .
Mathematical & Graphical MF of AFIS Input/output variables
Table 8 .
Mathematical & Graphical MF of AFIS Input/output variables | 4,676 | 2018-07-13T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Altered Mitochondrial Dynamic in Lymphoblasts and Fibroblasts Mutated for FANCA-A Gene: The Central Role of DRP1
Fanconi anemia (FA) is a rare genetic disorder characterized by bone marrow failure and aplastic anemia. So far, 23 genes are involved in this pathology, and their mutations lead to a defect in DNA repair. In recent years, it has been observed that FA cells also display mitochondrial metabolism defects, causing an accumulation of intracellular lipids and oxidative damage. However, the molecular mechanisms involved in the metabolic alterations have not yet been elucidated. In this work, by using lymphoblasts and fibroblasts mutated for the FANC-A gene, oxidative phosphorylation (OxPhos) and mitochondria dynamics markers expression was analyzed. Results show that the metabolic defect does not depend on an altered expression of the proteins involved in OxPhos. However, FA cells are characterized by increased uncoupling protein UCP2 expression. FANC-A mutation is also associated with DRP1 overexpression that causes an imbalance in the mitochondrial dynamic toward fission and lower expression of Parkin and Beclin1. Treatment with P110, a specific inhibitor of DRP1, shows a partial mitochondrial function recovery and the decrement of DRP1 and UCP2 expression, suggesting a pivotal role of the mitochondrial dynamics in the etiopathology of Fanconi anemia.
Introduction
Fanconi anemia (FA) is a rare genetic disorder, recessive autosomal or X-linked [1]. FA phenotype can be very heterogeneous, with clinical manifestations ranging from congenital malformations to metabolic dysfunction susceptibility and increased risk of cancer developing, particularly leukemia and squamous cell carcinoma [2,3]. However, bone marrow failure and aplastic anemia are the most common causes of death in FA patients, which develop at a younger age and with a 5000-fold increased risk compared to the healthy population [3]. So far, 23 genes have been identified as involved in FA, with 90% of mutations occurring in FANC-A, FANC-C, or FANC-G genes [4], where FANC-A represents accounts for two-thirds of cases. FANC genes encode proteins that assemble into a complex involved in DNA damage repair, specifically interstrand cross-links [5,6]. Recently, FA pathogenesis has also been associated with mitochondrial defects, which cause a metabolic shift toward anaerobic respiration, lipid accumulation, and unbalanced oxidative stress [7][8][9]. Specifically, the electron transport between respiratory complexes I and III is impaired [10], triggering elevated reactive oxygen species production (ROS) not counteracted by cellular antioxidant defenses [11,12]. These metabolic dysfunctions are associated with altered mitochondrial morphology, as mitochondria appear more swollen and with less defined cristae [10,[13][14][15][16]. Although literature confirms the metabolic dysfunction in FA [17][18][19][20], the molecular mechanisms correlating FANC genes' mutations with these defects are still to be clarified. Since the mitochondrial metabolism extent and its efficiency are strictly related to the mitochondrial network shape [21], several authors have described a defect in the mitochondrial dynamic, autophagy, and mitophagy processes [22][23][24][25].
Thus, this work aims to deeply investigate whether the mitochondrial metabolism alterations observed in FA depend on altered expression of proteins belonging to the oxidative phosphorylation (OxPhos) machinery or mitochondrial biogenesis and dynamics modulators in lymphoblasts and fibroblasts carrying the mutated FANC-A gene, comparing the results with isogenic-corrected FANC-A gene cell lines.
Results show that cells carrying the FANC-A mutation exhibit an overexpression of UCP2 and DRP1, suggesting that the altered mitochondria metabolism could depend on oxidative phosphorylation uncoupling and an imbalance toward mitochondrial fission. Treatment with P110, a specific inhibitor of DRP1 [26], can partially reverse the metabolic dysfunction and the organization of the mitochondrial network. Furthermore, FA cells show lower expression of Beclin1 and Parkin.
FANC-A Lymphoblasts and Fibroblasts Display Damaged Mitochondria Unable to Conduct an Efficient OxPhos
As previously reported in the literature [7,10,12,15,18], lymphoblasts and fibroblasts carrying the FANC-A gene mutation are characterized by an altered pyruvate/malateinduced oxygen consumption rate (OCR), partially compensated by respiration led by the complex II pathway (Figures 1A-C and 2A-C). The dysfunctional respiration is associated with the ATP synthesis decrement (Figures 1D and 2D) and a reduced OxPhos efficiency, as shown by the P/O values ( Figures 1E and 2E). The impaired OxPhos function depends on an altered electron transfer between respiratory complexes I and III ( Figures 1F and 2F). In addition, electron microscopy analysis on FA lymphoblasts shows swollen mitochondria with a disorganized inner membrane ( Figures 1G and 2G).
Dysfunctional Mitochondria Metabolism in FANC-A Cells Does Not Depend on OxPhos Protein Expression Alteration but Appears Correlated with Increased UCP2 Expression
To understand whether defective OxPhos in FA cells depends on an altered expression of respiratory chain proteins, Western blot analyses were performed against ND1 (a Complex I subunit, mitochondrial DNA-encoded), SDHB (a Complex II subunit, nuclear DNA-encoded), MTCO2 (a Complex IV subunit, mitochondrial DNA-encoded), and β subunit of ATP synthase (a subunit of F 1 moiety nuclear-DNA encoded) in lymphoblast ( Figure 3A) and fibroblast ( Figure 3B) cell lines. Data do not show significant differences in the expression of these proteins in both FA cell models compared to FAcorr. By contrast, FANC-A lymphoblast and fibroblast cell lines display a significant increase in uncoupling protein 2 (UCP2) expression in FANC-A mutated cells compared to their respective controls ( Figure 3A Representative electron microscopy imaging of FAcorr and FA lymphoblasts to evaluate the mitochondrial morphology. Black scale bars correspond to 1 µm. Data reported in Panels A-E were obtained using pyruvate/malate or succinate as respiring substrates. Data reported in Panels A-F are reported as mean ± SD, and each panel is representative of at least three independent experiments. Statistical significance was tested opportunely with an unpaired t-test or one-way ANOVA; *, **, and **** represent a significant difference for p < 0.05, 0.01, or 0.0001, respectively, between FA and FAcorr cells. (E) P/O ratio, representing an OxPhos efficiency marker. (F) Electron transfer between respiratory complexes I and III. (G) Representative electron microscopy imaging of FAcorr and FA lymphoblasts to evaluate the mitochondrial morphology. Black scale bars correspond to 1 µm. Data reported in Panels A-E were obtained using pyruvate/malate or succinate as respiring substrates. Data reported in Panels A-F are reported as mean ± SD, and each panel is representative of at least three independent experiments. Statistical significance was tested opportunely with an unpaired t-test or one-way ANOVA; *, **, and **** represent a significant difference for p < 0.05, 0.01, or 0.0001, respectively, between FA and FAcorr cells. Representative electron microscopy imaging of FAcorr and FA fibroblasts to evaluate the mitochondrial morphology. Black scale bars correspond to 1 µm. Data reported in Panels A-E were obtained using pyruvate/malate or succinate as respiring substrates. Data are reported as mean ± SD, and each panel is representative of at least three independent experiments. Statistical significance was tested opportunely with an unpaired t-test or one-way ANOVA; *, **, and **** represent a significant difference for p < 0.05, 0.01, or 0.0001, respectively, between FA and FAcorr cells.
Dysfunctional Mitochondria Metabolism in FANC-A Cells Does Not Depend on OxPhos Protein Expression Alteration but Appears Correlated with Increased UCP2 Expression
To understand whether defective OxPhos in FA cells depends on an altered expression of respiratory chain proteins, Western blot analyses were performed against Black scale bars correspond to 1 µm. Data reported in Panels A-E were obtained using pyruvate/malate or succinate as respiring substrates. Data are reported as mean ± SD, and each panel is representative of at least three independent experiments. Statistical significance was tested opportunely with an unpaired t-test or one-way ANOVA; *, **, and **** represent a significant difference for p < 0.05, 0.01, or 0.0001, respectively, between FA and FAcorr cells. and β subunit of ATP synthase (a subunit of F1 moiety nuclear-DNA encoded) in lymphoblast ( Figure 3A) and fibroblast ( Figure 3B) cell lines. Data do not show significant differences in the expression of these proteins in both FA cell models compared to FAcorr. By contrast, FANC-A lymphoblast and fibroblast cell lines display a significant increase in uncoupling protein 2 (UCP2) expression in FANC-A mutated cells compared to their respective controls ( Figure 3A (Complex III), and ATP synthase β subunit; WB signal of uncoupling protein 2 (UCP2) expression. Data in histograms are reported as mean ± SD, and each panel is representative of at least three independent experiments. Statistical significance was tested opportunely with an unpaired t-test; ** represents a significant difference for p < 0.01 between FA and FAcorr cells.
Mitochondrial Dynamic Is Unbalanced in FANC-A Cells
Mitochondrial biogenesis and dynamics play a pivotal role in the OxPhos efficiency main promoting the mitochondrial network organization maintenance [27]. Thus, the expression of proteins involved in mitochondrial biogenesis and fusion and fission processes was analyzed. Figure 4A) and fibroblasts ( Figure 4B) do not show significant differences in the expression of CLUH, an mRNA-binding protein involved in mitochondrial biogenesis, and OPA1 and MFN2, two proteins involved in mitochondrial fusion, compared to the controls. Conversely, expression of DRP1, a protein involved in mitochondrial fission, appears higher in FANC-A cells than in FAcorr cells. This alteration suggests imbalance in mitochondrial dynamics, where mitochondrial fission is promoted over fusion, leading to mitochondrial network disruption ( Figure 4C) and causing a consequent OxPhos efficiency decrease.
Mitochondrial Dynamic Is Unbalanced in FANC-A Cells
Mitochondrial biogenesis and dynamics play a pivotal role in the OxPhos efficiency main promoting the mitochondrial network organization maintenance [27]. Thus, the expression of proteins involved in mitochondrial biogenesis and fusion and fission processes was analyzed.
FANC-A lymphoblasts ( Figure 4A) and fibroblasts ( Figure 4B) do not show significant differences in the expression of CLUH, an mRNA-binding protein involved in mitochondrial biogenesis, and OPA1 and MFN2, two proteins involved in mitochondrial fusion, compared to the controls. Conversely, expression of DRP1, a protein involved in mitochondrial fission, appears higher in FANC-A cells than in FAcorr cells. This alteration suggests imbalance in mitochondrial dynamics, where mitochondrial fission is promoted over fusion, leading to mitochondrial network disruption ( Figure 4C) and causing a consequent OxPhos efficiency decrease. . For all the densitometry graphs, protein expression levels were normalized on the housekeeping signal, revealed on the same membrane. (C) Confocal imaging of FAcorr and FA fibroblasts stained with antibody against TOM20 (green) and DAPI (blue) to show the mitochondrial reticulum and nuclei, respectively. White scale bars correspond to 10 µm. The higher magnification insert corresponds to the area enclosed by the white square and is an example of mitochondrial network distribution. Fibroblasts were scored depending on the morphology of most of their mitochondrial population as elongated or intermediated/short. Results reported in the histogram on the right show that FA fibroblasts exhibited more often intermediated/short and less elongated mitochondria compared to FAcorr cells. Data are reported as mean ± SD. Histograms and WB signals are representative of at least three independent experiments. Statistical significance was tested with an unpaired t-test; **** represents a significant difference for p < 0.0001 between FA cells and FAcorr cells used as control.
Overexpressed DRP1 Reduction Partially Restores the OxPhos Activity
Since an unbalanced dynamic toward fission disrupts the mitochondrial network and lowers the OxPhos energy efficiency [28], FANC-A lymphoblasts and fibroblasts were treated for 24 h with P110, a specific DRP1 inhibitor [26]. Following this treatment, the overexpression of DRP1 is reduced in both treated FANC-A cell models compared to untreated cells, approaching the expression levels of the corrected cells ( Figure 5A for lymphoblasts and Figure 6A for fibroblasts). This reduction is associated with a recovery of mitochondrial network organization, as the balance between fusion and fission was partially restored ( Figure 6C). In each panel, data are reported as mean ± SD, and each graph is representative of at least three independent experiments. Statistical significance was tested appropriately with a one-way ANOVA or an unpaired t-test; *, **, ***, and **** represent a significant difference for p < 0.05, 0.01, 0.001, and 0.0001 between FA cells and the FAcorr cells used as control; °, °°, and °°°° represent a significant difference for p < 0.05, 0.01, and 0.0001 between FA cells untreated and treated with P110. P110 treatment also causes an increase in complexes I-III electron transport chain as well as oxygen consumption and ATP synthesis stimulated by pyruvate/malate ( Figure 5B for lymphoblasts and Figure 6B for fibroblasts), causing a recovery in OxPhos efficiency, probably due to both decreased UCP2 expression and improved mitochondrial reticulum organization. In addition, the energy metabolism amelioration results in fatty acid accumulation and lipid peroxidation reduction ( Figure 5B for lymphoblasts and Figure 6B for fibroblasts).
FANC-A Cells Display a Lower Protein Expression of Parkin and Beclin1
Since the literature reports that FA cells display dysfunctional mitophagy and autophagy [22][23][24][25], the expression of markers involved in these processes was evaluated. Data show that both in lymphoblasts ( Figure 7A) and fibroblasts ( Figure 7B), Parkin, an E3 ubiquitin ligase involved in the mitochondrion polyubiquitination [29], and Beclin1, an autophagy activator [30], are lower expressed than in FAcorr cell. Conversely, Pink1 and several autophagy effectors such as LC3, Atg7, Atg12, and Atg16L1 appear similar in FANC-A cells compared to the control. independent experiments. In each panel, data are reported as mean ± SD. Statistical significance was tested appropriately with a one-way ANOVA or an unpaired t-test; *, **, ***, and **** represent a significant difference for p < 0.05, 0.01, 0.001, and 0.0001 between FA cells and the FAcorr cells used as control; °, °°, and °°°° represent a significant difference for p < 0.05, 0.01, and 0.0001 between FA cells untreated and treated with P110.
FANC-A Cells Display a Lower Protein Expression of Parkin and Beclin1
Since the literature reports that FA cells display dysfunctional mitophagy and autophagy [22][23][24][25], the expression of markers involved in these processes was evaluated. Data show that both in lymphoblasts ( Figure 7A) and fibroblasts ( Figure 7B), Parkin, an E3 ubiquitin ligase involved in the mitochondrion polyubiquitination [29], and Beclin1, an autophagy activator [30], are lower expressed than in FAcorr cell. Conversely, Pink1 and several autophagy effectors such as LC3, Atg7, Atg12, and Atg16L1 appear similar in FANC-A cells compared to the control. in mitophagy and autophagy processes. WB signals of Pink1, Parkin, Beclin1, Atg7, Atg12, Atg16L1, and LC3. Actin signal has been used as a housekeeping signal for data normalization. The LC3 graph represents the ratio between the active (18 kDa) and inactive (20 kDa) form of the protein. Data are reported as mean ± SD. Histograms and WB signals are representative of at least three independent experiments. Statistical significance was tested with an unpaired t-test; ** represents a significant difference for p < 0.01 between FA cells and FAcorr cells used as control.
FANC-A mutated cells show damaged mitochondria associated with an impaired
OxPhos function due to altered electron transfer between complexes I and III, confirming data reported in the literature [7,10,12,18]. This metabolic alteration causes a reduction in the cellular energy status, lipid droplet accumulation, and an oxidative damage increment [12,16,31]. However, the mechanism that causes this alteration has not yet been elucidated. Therefore, considering that mitochondrial function depends on the OxPhos machinery integrity as well as the balance in the mitochondrial biogenesis and dynamics [32], the protein expression of some respiratory complexes subunits, mitochondrial fusion/fission, mitophagy, and autophagy markers were analyzed in lymphoblasts and fibroblasts mutated for FANC-A.
The expression evaluation of SDHB, a subunit of complex II, and beta subunit of ATP synthase, encoded by nuclear DNA [33,34], showed no significant differences between cells carrying the FANC-A mutation and the control cells. The same result was obtained by assessing the expression of ND1 and MTCO2, subunits of complex I and II, respectively, encoded by mitochondrial DNA [35]. Thus, it is possible to hypothesize that the metabolic defect does not depend directly on the altered expression of the OxPhos machinery, either at the nuclear or mitochondrial level. Nevertheless, FA cells show increased expression of UCP2, an uncoupling protein, which promotes the proton passage across the inner mitochondrial membrane, dissipating the proton gradient [36]. This increase could justify the uncoupling between ATP synthesis and oxygen consumption observed in FA cells, which causes oxidative stress increment and consequent DNA damage. UCP2 also regulates mitochondrial calcium uptake [37], and its overexpression could play a role in dysregulated calcium homeostasis observed in FA cells [38].
The OxPhos efficiency depends on the balance between fusion and fission processes [39], which regulate mitochondrial dynamics [40]. Fusion activation induces mitochondrial elongation and increased development of the mitochondrial network, promoting the OxPhos functionality and the interaction between mitochondria and other cellular organelles, including the endoplasmic reticulum [41]. Conversely, fission determines the breakdown of the mitochondrial network, a necessary condition during cell division [41]. However, isolated mitochondria appear much less efficient in energy production than those organized in a reticulum [42]. Based on the WB analysis shown in Figure 3, FA cells show similar expression of CLUH, an RNA-binding protein involved in mitochondria biogenesis, and OPA1 and MFN2, two proteins involved in fusion, but higher levels of DRP1, a GTPase involved in fission. Consequently, cells mutated for FANC-A display altered mitochondrial dynamics biased toward mitochondrial reticulum disaggregation, as shown by confocal microscopy images. On the other hand, literature and data reported in Figure 1 report that mitochondria in FA cells appear smaller and swollen with poorly defined ridges [10,[13][14][15][16], all characteristics attributable to increased mitochondrial fission. On the contrary, the Parkin and Beclin1 expression appears lower than in FAcorr cells, data in line with the alteration in mitophagy and autophagy reported in the literature, and that may justify the damaged mitochondrial accumulation [22][23][24][25].
The pivotal role of DRP1 overexpression in the metabolic dysfunction in FA cells is confirmed by the partial restoration of the mitochondrial dynamic and functionality observed after treatment with P110, a specific DRP1 inhibitor [26]. In detail, cells treated with P110 show a DRP1 expression like the control and a better-organized mitochondrial network. In addition, treatment with P110 improves mitochondrial function and efficiency, as the electron transport between complexes I and III is partially restored, resulting in an amelioration of OCR and ATP synthesis through the pathway led by complex I. In addition, treatment with P110 also reduces the expression of UCP2, improving energy efficiency. The aerobic metabolism improvement is associated with a lower accumulation of fatty acids and subsequent lipid peroxidation, improving the cell's overall redox state. On the other hand, Shyamsunder et al. have already suggested that DRP1 is involved in autophagy and mitophagy alteration in FA cells [22].
However, the mechanisms linking altered mitochondrial dynamics to FA gene mutations are still unclear. In this regard, Gueiderikh et al. recently demonstrated the involvement of FA protein in transcription as involved in ribosome biogenesis and nucleolar maintenance [43], which could explain the altered expression of proteins belonging to different pathways. Furthermore, an altered expression profile of miRNAs could also be involved [44].
Thus, although the link between the FA mutation and altered mitochondrial dynamics remains to be elucidated, this work suggests that the mitochondrial dynamic modulation plays a pivotal role in the pathogenesis of FA and that its restoration could be considered a therapeutic target.
Materials
All chemical compounds were of the highest chemical grade (i.e., Tris-HCl, KCl, EGTA, MgCl 2 , sulfuric acid, trichloroacetic acid, and HCl) and were purchased from Sigma-Aldrich, St. Louis, MO, USA.
Cellular Models and Treatment
Three different FANC-A lymphoblast cell lines (Lympho FA) and three different FANC-A primary fibroblast cell lines (Fibro FA) derived from four patients who carried out different mutations of FANC-A gene were obtained from the "Cell Line and DNA Biobank from Patients affected by Genetic Diseases" (G. Gaslini Institute)-Telethon Genetic Biobank Network (Project No. GTB07001) [10]. In addition, isogenic FA-corr cell lines, generated by the same FANC-A lymphoblast and fibroblast cell lines corrected with S11FAIN retrovirus (Lympho FAcorr and Fibro FAcorr), were employed as a control to maintain the characteristics of the FA cell lines except for the FANC-A gene mutation [10].
F o F 1 ATP-Synthase Activity Assay
The F o F 1 ATP-synthase activity was evaluated by incubating 10 5
P/O Ratio
The P/O value was calculated as the ratio between the aerobic synthesized ATP and the consumed oxygen and represents a measure of OxPhos efficiency. Efficient mitochondria have a P/O value of around 2.5 or 1.5 when stimulated with pyruvate and malate or succinate, respectively, as respiring substrates. A P/O ratio lower than 2.5 for pyruvate and malate or lower than 1.5 for malate indicates that oxygen is not completely used for energy production but contributes to reactive oxygen species (ROS) production [45].
Electron Microscopy Analysis
FA and FAcorr lymphoblast and fibroblast pellets were fixed with 2.5% glutaraldehyde 0.1 M cacodylate buffer pH 7.6 for 1 h at room temperature. After post-fixation with 1% OsO4 in cacodylate buffer for 1 h, pellets were dehydrated in an ethanol series and embedded in Epon resin. Ultrathin sections stained with uranyl-acetate and lead citrate were observed with a Jeol Jem-1011 transmission electron microscope [14].
Cell Homogenate Preparation
Fibroblast cell lines, which grow in adhesion, were detached from the culture flask by trypsinization for 5 min at 37 • C, after the culture medium removal and a wash in PBS (#14190250, ThermoFisher Scientifics, Waltham, MA, USA) to remove any traces of FBS. Then, trypsin (Trypsin-EDTA 1X in PBS, #ECB3052D, Euroclone, Milano, Italy) was blocked with a fresh culture medium, and the cells were collected. Lymphoblastoid cell lines growing in suspension were only collected. All cells were then centrifuged at 1000× g for 5 min to remove the growth medium. Next, pellets were washed twice in PBS and centrifuged again. All pellets were resuspended in an appropriate volume of Milli-Q water and sonicated (Microson XL Model DU-2000, Misonix Inc., Farmingdale, NY, USA) twice for 10 seconds each with a 30-second interval in between and in ice to prevent heating. Total protein content was evaluated according to the Bradford method [46].
Electron Transport between Complexes I and III Evaluation
The electron transfer between respiratory complex I and III was analyzed spectrophotometrically, following the reduction of cytochrome c, at 550 nm. For each assay, 50 µg of total protein was used. The reaction mix contained 100 mM Tris-HCl pH 7.4 and 0.03% of oxidized cytochrome c (#C2867, Sigma-Aldrich, St. Louis, MO, USA). The assay started with the addition of 0.7 mM NADH. If the electron transport between Complex I and Complex III is conserved the electrons pass from NADH to Complex I, then to Complex III via coenzyme Q, and finally to cytochrome c [12].
Lipid Content Evaluation
The lipid content was evaluated by the Sulfo-Phospho-Vanillin assay. Briefly, samples were incubated with 95% sulfuric acid at 95 • C for 20 min, quickly cooled, and evaluated at 535 nm. Afterward, a solution of 0.2 mg/mL vanillin (#V1104, Sigma-Aldrich, St. Louis, MO, USA) in 17% aqueous phosphoric acid was added to the samples, incubated for 10 min in the dark, and reevaluated at 535 nm. A mix of triglycerides (#17810, Sigma-Aldrich, St. Louis, MO, USA) was used to obtain a standard curve [12].
Malondialdehyde Level Evaluation
The malondialdehyde (MDA) concentration was evaluated by the thiobarbituric acid reactive substances (TBARS) assay. This test is based on the reaction of MDA, a breakdown product of lipid peroxides, with thiobarbituric acid (#T5500, Sigma-Aldrich, St. Louis, MO, USA). The TBARS solution contained 15% trichloroacetic acid in 0.25 N HCl and 26 mM thiobarbituric acid. To evaluate the basal concentration of MDA, 600 µL of TBARS solution was added to 50 µg of total protein dissolved in 300 µL of Milli-Q water. The mix was incubated for 40 min at 95 • C. After the sample was centrifuged at 20,000× g for 2 min, the supernatant was analyzed spectrophotometrically at 532 nm [12].
Confocal Microscopy Analysis
Cells were cultured in chamber-slides for 24 h in the absence or presence of P110. After PBS washes, cells were fixed with 0.3% paraformaldehyde (#P6148, Sigma-Aldrich, St. Louis, MO, USA) and permeabilized with 0.1% triton (#X100, Sigma-Aldrich, St. Louis, MO, USA). Cells were incubated overnight at 4 • C with the antibody against TOM20 (#42406S, Cell Signaling Technology, Danvers, MA, USA). After PBS washes, cells were incubated for 1 h at 25 • C with the Alexa-546-conjugated anti-rabbit antiserum (#A11010, Invitrogen, Waltham, MA, USA) as a secondary antibody. After PBS wash, chamber slides were mounted in mowiol. Immunofluorescence confocal laser scanner microscopy (CLSM) imaging was performed using a laser scanning spectral confocal microscope TCS SP2 AOBS (Leica, Wetzlar, Germany), equipped with Argon ion, He-Ne 543 nm, and He-Ne 633 nm lasers. Images were acquired through a HCX PL APO CS 40×/1.25 oil UV objective and processed with Leica. Images were acquired as single transcellular optical sections.
To evaluate the mitochondrial network shape, fibroblasts were scored depending on the morphology of most of their mitochondrial population as elongated or intermediated/short following the method described in [47].
Statistical Analysis
Data were analyzed appropriately using unpaired t-test or one-way ANOVA, using Prism 8 Software. Data are expressed as mean ± standard deviation (SD) and are representative of at least three independent experiments. An error with a probability of p < 0.05 was considered significant. | 5,812.8 | 2023-03-31T00:00:00.000 | [
"Biology"
] |
Connecting cooperative transport by ants with the physics of self-propelled particles
Paratrechina longicornis ants are known for their ability to cooperatively transport large food items. Previous studies have focused on the behavioral rules of individual ants and explained the efficient coordination using the coupled-carrier model. In contrast to this microscopic description, we instead treat the transported object as a single self-propelled particle characterized by its velocity magnitude and angle. We experimentally observe P. longicornis ants cooperatively transporting loads of varying radii. By analyzing the statistical features of the load's movement, we show that its salient properties are well captured by a set of Langevin equations describing a self-propelled particle. We relate the parameters of our macroscopic model to microscopic properties of the system. While the autocorrelation time of the velocity direction increases with group size, the autocorrelation time of the speed has a maximum at an intermediate group size. This corresponds to the critical slowdown close to the phase transition identified in the coupled-carrier model. Our findings illustrate that a self-propelled particle model can effectively characterize a system of interacting individuals.
I. INTRODUCTION
Cooperative transport by ants is the concerted effort to carry large food items toward their nest.This behavior is observed in a variety of ant species [1][2][3].Longhorn crazy ants (Paratrechina longicornis) display some of the most impressive cooperative transport abilities.These two-and-a-half millimeter ants gather in groups consisting of hundreds of individuals to haul loads that are heavy and large, exceeding 10,000 times their own weight and more than one hundred times their body length [4].Moreover, these ants show remarkable collective navigation abilities as they transport large loads across complex obstacles [3][4][5] and through disordered environments [6] to deliver them to their nest quickly.To achieve this efficiency and robustness, the ants rely on the following cooperative behavior: When a single ant finds a food item which is too large for her to transport by herself to her nest, she lays a pheromone trail in order to recruit ants from the nest to the site [7].Once enough ants are gathered, they cooperate to transport the item using their mandibles, assuming various roles during transport: Newly attached ants function as temporary leaders as they are informed of the nest location.They persist in pulling the load towards the nest irrespective of the current direction of motion for approximately ten seconds [5,8,9].Ants that have been connected to the load for more than ten seconds sense the current direction of motion and align their efforts accordingly.They pull the load if they are attached at the leading edge and lift the load to reduce friction with the floor if they are attached to the trailing edge [10].There is a constant turnover of ants, as carriers sometimes let go of the load, and unattached ants take their place.These relatively simple behavioral rules are the basis of the agent-based coupled carriers model [4,5,[8][9][10][11].Numerical simulations within this model's framework revealed a critical finite-size phase transition between uncoordinated (individualistic) and coordinated ant behavior that sets an ideal group size which maximizes the collective response to an informed ant [4,[10][11][12].While this model successfully replicates load trajectories for different group sizes, it does not provide a coarse-grained, analytic understanding of load trajectories.
In this paper, we show how a self-propelled particle model [13][14][15][16] end, we experimentally observed trajectories of spatially unconstrained rings of varying radii which are carried by P. longicornis ants (cf.Fig. 1) and developed a statistical description of these trajectories.We show that a model originally developed to describe cell chemotaxis [17] can be employed to describe ants that are transiently tethered together by a load as one large self-propelled particle.To our knowledge, cooperative transport presents the first example of collective motion in an animal group which can be characterized by the physics of a single effective agent [18,19].
In the following, we first present the experiment and discuss the statistical properties of the trajectory data.Then, we introduce a four-parameter description of the data in terms of deterministic and random accelerations of the velocity's angle and magnitude.Finally, we analyze the resulting model parameters as a function of the ring size which correlates with the number of carrying ants linking the different scaling behaviors to the phase transition demonstrated in the coupled-carriers model.
A. Experimental setup
In order to compare the trajectories of cooperative transport between two given points, an enclosed arena (78 cm × 34 cm) was placed ∼ 1 m from a nest of Paratrechina longicornis ants in the field as illustrated in Fig. 1a.The arena had a ∼ 1 cm wide single opening directed towards the nest.A load was repeatedly placed at a predetermined location on the far side of the arena.The initial load position, the arena entrance, and the nest entrance were all aligned (cf.supplementary material [20]).
Silicon rings of varying radii were incubated in cat food overnight in order to make them as attractive as food to the ants.A single ring of radius r = 0.15 cm, 0.4 cm, 1 cm or 2 cm was placed inside the arena, ∼ 75 cm away from the arena entrance.After ants discovered the ring, they then cooperatively transported it towards the nest (Fig. 1b,c).This constitutes a single trajectory of the load.Once the ants reached the arena opening, the same ring was returned to the initial location, and cooperative transport was immediately resumed.This was repeated for N traj = 43 − 55 times for each of the four load radii.This process was recorded and the position of the center of the ring was extracted, as displayed in Fig. 2a-d.In the following, we will discuss the trajectory features of different ring radii.We emphasize here that radius could be used interchangeably with the number of ants participating in the cooperative transport N because the latter increases linearly with r [20].Each set of trajectories of a given ring radius has a perpendicular spread to the line connecting the initial position of the load and the arena exit ∆y as previously shown in [11].Some trajectories contain turnarounds (i.e.loops).The movement is reminiscent of a biased Brownian walk towards the nest.The source of the movement's bias is the ants' motivation to quickly carry the object to the arena exit and subsequently to their nest.The transporting group is directed by a pheromone trail laid by non-carrying ants [7] and by newly arriving ants who influence the group by persistently tugging the object in the nest direction for ∼ 10 s [8,10].
B. Velocity distributions
Plotting the velocity distributions for sets of trajectories of varying radii has proven most indicative of the motion's nature (Fig. 2e-h).The velocity magnitude v adheres to a preferred value v c and exhibits concentric features.The angle of the velocity vector θ is strongly biased towards the nest direction (θ 0 = 0).
For smaller radii, v is not as tightly bound to v c and the angular spread ∆θ around θ 0 is larger (Fig. 2e-f).These trajectories are more erratic and display a wider perpendicular spread as shown in Fig. 2a, b.For larger shapes, v adheres more closely to v c and the uni-directionality towards the nest is more pronounced (Fig. 2g-h).This corresponds to the trajectories' larger angular persistence and more narrow perpendicular spread as shown in Fig. 2c, d.These observations are in agreement with previous empirical results [10].In the following, we will present a self-propelled particle model that captures these features.
III. MODEL OF ANTS AS A SINGLE SELF-PROPELLED PARTICLE A. Macroscopic trajectory features and model choice
While the stochastic motion of the rings crosses over into regular diffusion at late times, the velocity's angle θ and magnitude v exhibit persistence for short times below characteristic timescales τ θ and τ v , respectively.Persistence implies that v and θ are not completely randomized between time steps as would be the case in regular diffusion.Instead, the object partially preserves its previous velocity angle and magnitude.Furthermore, the velocity components in x and y direction are not independently randomized.Instead, the velocity adheres to a preferred velocity magnitude v c .One of the simplest implementations of this notion is to assume that v itself performs a Brownian motion [17].In such a model, the self-propulsion is subject to friction, v/v c − 1, which is dimensionless and depends on the difference between the desired velocity magnitude v c and the instantaneous magnitude v.For velocities v < v c , this friction becomes negative, i.e., the self-propulsion works to increase v, and vice versa.Thus, the steady-state distribution is centered around v c > 0. The resulting self-propelled motion resembles an Ornstein-Uhlenbeck process [15,21] in velocity space.This has been discussed by Schienbein and Gruler [17] and others [22,23], For every size, a trajectory is partially highlighted, and a schematic ring illustrates the transported object.We have excluded the last 10 cm (x > 68 cm) from our statistical analysis because strong deviations from the central line (y ≫ 0) cause the preferred angle pointing towards the exit to strongly deviate from θ0 = 0 close to the exit.The two-dimensional velocity heat maps for the experimental trails are shown in panels e) to h).The velocity is biased in the target direction θ0 = 0 • and centered around a preferred magnitude vc with width αvvc.Most notably, the velocities vx and vy along the principal axes are not independent.With increasing load radius, the histograms become more concentrated around the mean magnitude and the target direction ⃗ ex, which is also illustrated in Fig. 3a-b.Sections of the trails in which the load did move for more than 10 s because of momentary dropping of the load (i.e., no ants were attached to the ring) where excluded when plotting the velocity histograms.Simulated trails and velocity heat maps are shown in panels i) to l) and m) to p), respectively.The simulations are based on Eq. ( 1)-( 3) with parameters taken from fitting the experimental velocity histograms to the theoretical distributions in Eqs. ( 4) -( 5) and shown in Fig. 3c,d.
stationary distributions auto-correlation best-fit parameters v and preferred velocity magnitude vc for different load radii, resulting from the fitting of the magnitude.1/α 2 v quantifies the strength of the returning force towards vc and increases with r.The preferred velocity magnitude has a maximum for the ring with r = 1 cm and drops significantly for the ring with radius r = 2 cm.Auto-correlation of the time-series of e) cos(θ(t)) and f) velocity magnitude v(t).The solid line represents the least square fit of the functions gvv(t) = ⟨v(0)v(t)⟩ = v 2 c + αvv 2 c e −t/τv and g θθ (t) ∼ e −t/τ θ .The auto-correlation of the velocity magnitude and angle decay over time with decay constants τv and τθ , respectively, which are shown in Fig. 4. Best-fit parameters g) vc and h) αvvc extracted by fitting velocity magnitude distribution Pv (red) and the auto-correlation gvv(t) (black).
and originally was used to describe the migration of single cells.One of the hallmarks of such a motion is that the independent degrees of freedom are the magnitude and the angle of the velocity vector.Based on our experimental findings, we thus describe the trajectories as a self-propelled motion with a velocity kernel.To this end, consider the Langevin equations for Brownian motion of the instantaneous velocity ⃗ v [17]: This set of equations describes the time evolution of the load's position ⃗ r and its velocity ⃗ v which is composed of its magnitude v = v 2 x + v 2 y and direction given by angle θ, defined by tan θ = v y /v x .The normalized noise terms are delta-correlated, The motion is characterized by v and θ that relax towards their preferred values, v c and θ 0 .The relaxation of v towards v c corresponds to the aforementioned dimensionless friction resulting in self-propulsion.Simultaneously, the motion is slowly randomized by the Gaussian stochastic variables η v and η θ [24].The spread of v around v c due to randomization is quantified by α v v c .The timescale τ v is the time it takes for velocities v ̸ = v c to decay towards v c while τ θ is the time it takes for angles θ ̸ = θ 0 to decay towards θ 0 .The long-range attraction to the nest is encoded by a preferred velocity direction θ 0 , with strength α θ .A subtlety of the model presented in Eqs.(1-3) is that v may formally acquire negative values.However, in the experimental data, only very few points are close to v = 0. We can, therefore, safely disregard such events in the following.It is also possible -if somewhat cumbersome -to include a local potential per mass unit V (⃗ r), which creates an additional, spatially dependent acceleration − ⃗ ∇V (⃗ r) acting on ⃗ v.This could be used to capture the effect of potential pheromone trails guiding the load away from the preferred direction of motion θ 0 .Such are not present in our case and can safely therefore be disregarded for the following analysis [20].
B. Possible turning mechanisms
The model contains two velocities v c , α v v c , and two timescales τ v , τ θ .The dimensionless parameter α θ measures the strength of the global bias, i.e. how closely the target direction θ 0 is adhered to.The motion becomes increasingly unidirectional the larger α θ is, while for α θ = 0, the motion is isotropic.Similarly, the dimensionless ratio α v encodes how strongly the velocity magnitude is randomized, causing v to deviate from v c .If α v > 1, the velocity is randomized so quickly that θ can flip in a short time merely due to the large magnitude fluctuations.In contrast, if α v < 1, v adheres close enough to v c , such that the velocity does not flip its angle.Then turnaround (i.e., a reversal of the velocity vector) only happens by slowly rotating θ while moving at nonzero velocity.For this latter case, one can distinguish whether the trajectory turns around more quickly due to the randomization of the angle or due to the influence of the bias: Consider a point in the trajectory where the object moves opposite to the global bias.Following the factors preceding the restoring and randomizing components in Eq. ( 3), turning around due to angular fluctuations happens on a time scale of T r = τ θ , while the turnaround time towards the nest due to the bias is T b = τ θ /α θ .By observing a turnaround, we cannot distinguish whether it occurred due to randomization of the angle or the global bias.However, from experiments, we can extract an effective angular correlation time τθ = τθ (τ θ , α θ ), which takes into account both turning mechanisms and can be analytically approximated [20].Therefore, given that we determined α θ and τθ from a set of experimental trajectories, we are able to approximate τ θ .
IV. ESTIMATION OF MODEL PARAMETERS
All model parameters are accessible by fitting the experimental data: α v and α θ can be extracted from the steady state probability distribution in velocity magnitude and angle.The timescales τ v and τ θ follow from their respective autocorrelation functions.
A. Parameters from velocity distributions
The distributions of the measured velocity components in angle and magnitude are shown in Fig. 3a and 3b.They are well described by the steady-state solutions of Eqs.(2-3) for θ and v, where are normalization constants, and I 0 is the modified Bessel function of the first kind.The bias strength α θ was determined by fitting P θ to the angular distributions (Fig. 3a), α v and v c were determined by fitting P v to the velocity magnitude distributions (Fig. 3b).After normalizing velocity ⃗ v by v c , its steady state properties only depend on α v and α θ .The best-fit values for these latter two parameters are shown in Fig. 3c, d and are a measure for the strength of the returning force towards v c and θ 0 .We find that both 1/α 2 v and α θ increase with increasing ring radius, which is consistent with the trajectories' stronger adherence to the target values (v c , θ 0 ) with increasing load radius.The slight increase in P θ at v ≈ 0 for r = 0.15 cm in our experimental data presented in Fig. 3b arises from temporary dropping of the load, which the fitted distribution does not capture.
B. Parameters from dynamical trajectory features
To experimentally determine the dynamical parameters τ v and τ θ , we turn towards the autocorrelation functions.The correlation function the velocity magnitude is known exactly [17], and reads g vv (t) = ⟨v( 0 ) 2 e −t/τv .Our experimental results and the least square fit of g vv are shown in Fig. 3f.The optimal fit parameters v c and α v v c take on similar values to the ones found from the stationary velocity distributions as shown in Fig. 3g, h.In both cases, v c takes on a maximal value at ring radius r = 1 cm, and α v v c decreases monotonically with ring radius.The steady-state and the dynamical properties are the result of the same dynamical relations.Since the Langevin Eq. ( 3) for the angle is nonlinear, the time-dependent cosine-correlation function g θθ (t) = ⟨cos θ(0) cos θ(t)⟩ has to be calculated perturbatively in α θ [20].We find that the exponential decay of the correlations, g θθ (t) ∼ e −t/τ θ , contains the effective decay rate τ −1 Performing a fit for the autocorrelation functions at short times [20] yields τ v and τ θ , which are shown in Fig. 4a, d, respectively.For all radii, T r > T b , from which we conclude that an object moving away from the nest turns around more quickly due to the global bias, not the randomization of the angle.From the experimentally determined turning time τθ and preferred velocity magnitude v c , one can calculate the turning radius R = v c τθ .We find that R = 0.7 − 1.7 cm, which is in line with previously reported values for the turning radius [11].
C. Simulations confirm proper model choice
Based on the Langevin equations (1-3) and best-fit parameters (Fig. 3c, d), we numerically simulated 10 trajectories per ring radius.We use ∆t = 0.1 s and simulated the trajectories until they reached x > 78 cm, which took approximately 10.000 time steps per trajectory.These trajectories and their respective velocity histograms are shown in Fig. 2i-p.The simulated and experimental trajectories show similar properties (Fig. 2): The trajectories of smaller ring radii are more erratic, while those of larger ring radii are more smooth.Also, we find strong agreement between simulated and experimental velocity heat maps (Fig. 2m-p).This further validates our selfpropelled particle model's accuracy in capturing the load behavior and confirms the correctness of our chosen model parameters.
We find that the spread of the experimental trails ∆y remains uniform with growing x, while for simulated trajectories, the ∆y grows.This is due to the fact that in experiments, the load is biased towards the exit door of the arena, not merely θ 0 .Incorporating this spatial dependency into Eq.( 3) leaves the velocity histograms mainly unaffected because during most of the trail the arena exit lies at an angle ≈ θ 0 .Consequently, we choose not to include this spacial dependency in our model.We now investigate the dependence of the model parameters on load radius, with the goal of finding a meaningful connection between the dynamics of the effective selfpropelled particle and individual ant behavior.To this end, given both steady state parameters (α −2 v , α θ ) and the dynamical parameters (τ v , τ θ ) for all four load radii, we analyze each of the four force terms appearing in Eqs.(2)(3) separately: The deterministic relaxation towards (v c , θ 0 ), encoded by the friction rates (1/τ v , α θ /τ θ ), as well as the rates of randomization (α 2 v /τ v , 1/τ θ ).The resulting characteristic time scales as a function of r are shown in Fig. 4. The angular rates (τ θ , α θ /τ θ ) increase monotonically with increasing ring radius (Fig. 4b, d), which is consistent with previous research showing a decrease in trajectory curvature with increasing radius [10].On the other hand, τ v and τ v /α 2 v has a maximum at an intermediate ring radius of r = 0.4 cm (Fig. 4a, c).Previously, using the coupled-carriers model a phase transition at this load radius was identified [10].It is known that close to a phase transition, the recovery rate after small perturbations slows down, leading to an increase in the temporal autocorrelation of the magnetization, a phenomenon known as 'critical slowing down' [25].In our experimental system, the load's velocity magnitude is determined by the internal alignment of the ants, which can be thought of as the system's magnetization.Therefore, the increase in τ θ aligns with the previously identified critical point.
B. Relating model parameters to our system of carrying ants
Next, we rationalize a simple scaling ansatz that leads to a maximal persistence time τ v for intermediate group sizes.
We assume that a small number of attached ants struggle to maintain the average velocity magnitude when repositioning or changing their state from lifting to pulling the load [10].Therefore, the auto-correlation time τ v is diminished.On the other hand, for a large number of ants, the target velocity v c is quickly reattained by averaging if a disturbance made it deviate.In between these two limits, for carrier numbers in the range of 10-15, the ants keep a once acquired velocity for a maximal time τ 0 .Therefore, we propose the following scaling relation: Here, N 0 is the number of ants at which the effective friction coefficient is minimal, while ∆ is an estimate of the fluctuation in the number of attached ants.While the ansatz chosen here is probably not unique, the location and width of the maximum in τ v and the typical timescales in the experiment strongly insinuate that comparable values would be recovered in more elaborate scenarios.
In contrast to the scaling of τ v , the angular relaxation rate τ θ increases essentially linearly with r.This makes sense if the travel direction θ is collectively negotiated [11]: Assuming that the retention time for an individual ant is τ 0 , the collective memory about the target direction is averaged out (i.e., deleted) only after a much longer time τ θ ∼ N τ 0 [9]: Therefore, in contrast to mechanically enforced compromises (tug-of-war), the travel direction -as a decision-based group effort -is not subject to the diminishing relative contribution of the individual.
Note that τ θ , compared to τ v is larger by a factor of π due to the different normalization between v and θ.
Regarding the normalized standard deviation in the velocity α v , we reiterate that the average velocity magnitude is not negotiated but emerges by averaging.Thus the ensemble average is expected to scale with a standard deviation α v ∼ N −1/2 according to the law of large numbers.It is known experimentally that the target velocity is enforced by the pulling ants, which constitute around half of the attached ants [10].Using this information, the scaling form Eq. ( 6) is constructed such as to capture both the small-N limit with α v (N → 0) = ∆/2 as well as the large-N limit where α Finally, we point out that the acceleration bias τ θ /α θ (Fig. 4b), does not lend itself to such a simple analysis.This is not unexpected because the strength of the bias α θ depends on the arrival rate of new, informed ants.Therefore, the timescale τ θ /α θ is subject to environmental conditions which are not experimentally controllable in the present experiment which prevents us from to drawing firm conclusions.The simultaneous fit yields the following parameter values, where error propagation was taken into account: N 0 = 10.06 ± 1.67 (9) ∆ = 9.95 ± 0.32 (10) τ 0 = 1.53 s ± 0.04 s (11) The goodness of fit qualifiers are χ 2 = 38.81,objective value = 19.40 and r 2 = 0.980.The fits are shown in Fig. 4a, c and d.Similar properties have been reported in Ref. [10] within numerical calculations within the coupled-carriers model.Namely, the timescale for reorientation of an ant found to be τ lit 0 ∼ 1.4 s, and the most cooperative and responsive group sizes were in the range of N lit 0 ∼ 10 ants [11] corresponding to the aforementioned phase transition.
VI. CONCLUSIONS AND OUTLOOK
We have modeled the movement of a load being cooperatively transported by ants as a biased self-propelled particle subject to velocity diffusion.We also related the presented statistical model to other microscopic features of the previously established coupled-carriers model using a simple scaling ansatz of the model parameters.We analyzed the key dynamical and steady-state properties of ant cooperative transport and found that the autocorrelation time of the velocity magnitude reaches a maximum for intermediate group sizes.We have found that this model can successfully reveal the same phase transition previously identified using the agent-based coupled-carriers model.However, the statistical approach presented in this study does not rely on specific knowledge about the behavior of individual ants.Instead, it uses a model describing a single self-propelled particle to reconstruct group behavior while making no assumptions about the individual agents.Therefore, we have shown that these types of models can provide a valuable tool for describing and understanding collective behavior in which microscopic details concerning the agents' individual and cooperative behavior are unknown.Furthermore, our findings open up the exciting possibility of using Ising-type models similar to the coupled-carriers model to analyze the behavior of individual, self-propelled agents.
Figure 1 .
Figure 1.Cooperative transport trajectories.a) Experimental trajectories (gray) of transported silicon ring with radius r = 0.4 cm.One trajectory is highlighted in black with an illustration of ants carrying the object.Photographs of groups of ants cooperatively transporting rings of radii r = 0.4 cm and 1.0 cm, respectively.The red dotted lines illustrate the trails are along which the centers of the rings move towards the nest.
Figure 2 .
Figure2.Trails of the center of rings transported by ants with radii r = 0.15, 0.4, 1 and 2 cm in panels a) to d), respectively.The outline of the figures represents the arena boundary by which the ring's movement is confined.The opening at (x, y) = (78 cm, 0 cm) represents the exit of the arena.For every size, a trajectory is partially highlighted, and a schematic ring illustrates the transported object.We have excluded the last 10 cm (x > 68 cm) from our statistical analysis because strong deviations from the central line (y ≫ 0) cause the preferred angle pointing towards the exit to strongly deviate from θ0 = 0 close to the exit.The two-dimensional velocity heat maps for the experimental trails are shown in panels e) to h).The velocity is biased in the target direction θ0 = 0 • and centered around a preferred magnitude vc with width αvvc.Most notably, the velocities vx and vy along the principal axes are not independent.With increasing load radius, the histograms become more concentrated around the mean magnitude and the target direction ⃗ ex, which is also illustrated in Fig.3a-b.Sections of the trails in which the load did move for more than 10 s because of momentary dropping of the load (i.e., no ants were attached to the ring) where excluded when plotting the velocity histograms.Simulated trails and velocity heat maps are shown in panels i) to l) and m) to p), respectively.The simulations are based on Eq. (1)-(3) with parameters taken from fitting the experimental velocity histograms to the theoretical distributions in Eqs.(4) -(5) and shown in Fig.3c,d.
Figure 3 .
Figure 3. Stationary a) angular distributions and b) velocity magnitude distributions for different load radii.The dots correspond to experimental results.The solid lines represent the least square fit of the theoretical distributions, given in Eqs. 4 and 5. c) Best-fit parameters α θ for different radii resulting from the fitting of the angular distributions quantifying the strength of the returning force towards θ0.d) Best-fit parameters 1/α 2v and preferred velocity magnitude vc for different load radii, resulting from the fitting of the magnitude.1/α 2 v quantifies the strength of the returning force towards vc and increases with r.The preferred velocity magnitude has a maximum for the ring with r = 1 cm and drops significantly for the ring with radius r = 2 cm.Auto-correlation of the time-series of e) cos(θ(t)) and f) velocity magnitude v(t).The solid line represents the least square fit of the functions gvv(t) = ⟨v(0)v(t)⟩ = v 2 c + αvv 2 c e −t/τv and g θθ (t) ∼ e −t/τ θ .The auto-correlation of the velocity magnitude and angle decay over time with decay constants τv and τθ , respectively, which are shown in Fig.4.Best-fit parameters g) vc and h) αvvc extracted by fitting velocity magnitude distribution Pv (red) and the auto-correlation gvv(t) (black).
Figure 4 .
Figure 4. Optimal fit parameters taken from angular and velocity magnitude distributions and from the autocorrelation as a function of r.These give time scales of the corrective and randomizing force on the (a-b) velocity magnitude and (c-d) angle.The black curves represent the best fit of the phenomenological scaling ansatz with parameters ∆, N0, and τ0.No scaling relation was suggested for α θ ; therefore, c) contains no curve.On the left and right are depictions of the respective effect of each force term on the respective distributions. | 6,924.4 | 2023-01-24T00:00:00.000 | [
"Physics",
"Biology"
] |
Optimizing Laguerre expansion based deconvolution methods for analysing biexponential fluorescence
: Fast deconvolution is an essential step to calibrate instrument responses in big fluorescence lifetime imaging microscopy (FLIM) image analysis. This paper examined a computationally effective least squares deconvolution method based on Laguerre expansion (LSD-LE), recently developed for clinical diagnosis applications, and proposed new criteria for selecting Laguerre basis functions (LBFs) without considering the mutual orthonormalities between LBFs. Compared with the previously reported LSD-LE, the improved LSD-LE allows to use a higher laser repetition rate, reducing the acquisition time per measurement. Moreover, we extended it, for the first time, to analyze bi-exponential fluorescence decays for more general FLIM-FRET applications. The proposed method was tested on both synthesized bi-exponential and realistic FLIM data for studying the endocytosis of gold nanorods in Hek293 cells. Compared with the previously reported constrained LSD-LE, it shows promising results.
Introduction
Fluorescence lifetime image microscopy (FLIM) has been showing great potential for visualizing biomolecules or physiological parameters (such as Ca 2+ , O 2 , pH, temperature) in fixed and living cells in biology and medicine [1][2][3][4][5][6][7][8]. It can localize specific fluorophores as usual fluorescence intensity imaging, but more importantly it can probe local environments around the fluorophores. It can be implemented in scanning confocal or multi-photon microscopes, or in wide-field microscopes and endoscopes [4][5][6][7]. In this paper we will focus on time-domain time-correlated single-photon counting (TCSPC) or time-gated FLIM techniques, where the measured fluorescence response from biological tissues to a laser excited pulse is a convolution of the intrinsic fluorescence impulse response function (fIRF) (emitted from tissues) with the instrument impulse response function (iIRF) (contributed by the distorted light pulse, detection mechanisms, instrument electronics and other delay components [9]).
In FLIM experiments using Förster resonance energy transfer (FRET) techniques to study cellular protein interaction networks [10-12], fIRF can be modelled as a sum of two exponentials (or more than two if fluorophores have complicated fluorescence decay profiles). Accurately measuring lifetime components is very crucial for detecting protein-protein interactions and protein conformational changes inside living cells [13][14][15][16][17]. In this paper we will examine the applicability of the proposed approach for bi-exponential fluorescence decays, especially for those having a fast (sub-nanosecond) and a slow (nanosecond) components, common situations encountered when we use FLIM-FRET techniques to study the endocytosis of gold nanorods (GNR) in living cells. The study results will be valuable for exploring research in drug delivery or disease treatment.
Deconvolution techniques used to recover the fIRF from the fluorescence histograms measured by TCSPC systems are very important and typical in FLIM analysis. Although tailfitting is widely used for fast analysis, scientists are still keen to know exact estimations. Numerous deconvolution techniques have been proposed [18][19][20][21], and among them least squares deconvolution based on Laguerre expansion (LSD-LE) has been proven effective showing superior sensitivity in disease detection [22][23][24][25][26]. There are, however, two mysterious factors to be chosen in order to use LSD-LE properly in diagnosis or parameter identification applications [27-31]. These LSD-LE methods using ordinary least squares analysis (OLSD-LE) can easily cause 'overfitting' (i.e. fitting the noise instead of the true signals), a serious problem especially when the acquisition has to be fast for diagnosis applications where the detected photon signal is contaminated by noise. Liu et al. presented a constrained LSD-LE (CLSD-LE) to avoid this problem, and they concluded that the chosen Laguerre basis functions (LBFs) should be mutually orthonormal [9] within the observation window (T) to perform CLSD-LE. To meet this condition, however, T needs to be much larger than the slowest decay to ensure that the LBFs, fIRF, and the derivatives of LBFs are 'sufficiently close to zero' at t ~T (t being the time delay with respect to the laser pulse). This would require shinning pulsed lasers with a low duty-cycle, reducing the efficiency of photon collection.
There are two major contributions in this paper. First, we introduce new criteria for selecting LBFs according to the residual level of Laguerre expansion instead of the mutual orthonormalities between LBFs. This will allows LSD-LE to be applicable even when T is comparable to the largest lifetime component. Second, the selection criteria are expanded to study bi-exponential decays for more general FLIM-FRET applications instead of only for diagnosis applications where single-exponential approximations might not produce enough contrast. To demonstrate the performances, we will test the proposed approaches on both synthesized data and realistic FLIM data.
Theory and method
In a time-domained FLIM experiment the measured fluorescence decay y(t) is the convolution of the fIRF h(t) and iIRF I(t): where (t) where b l (k; ) is determined by the Laguerre dimension L and the scale , and c l is the l th expansion coefficient. It is well known that LBFs form an orthonormal basis set only when N→∞. Previous studies concluded that the parameters L and should be chosen such that the LBFs and their corresponding derivatives should be 'sufficiently close to zero' at t ~T [9]. This condition would need a much larger T (compared to the largest lifetime component; 2 in this case) by using a pulsed laser with a lower duty cycle. In fact the expansion of fIRF with LBFs is simply a fitting problem where the optimal criteria should be the extent to which the sum of squared errors (SSE) can be minimized regardless of the orthonormalities between the LBFs used within the observation window 0 ≤ t ≤ T. Here, we define the normalized SSE (NSSE) for the fitting as: Minimizing NSSE h would be a straightforward way to assess the performances for fitting an fIRF with different lifetimes by Laguerre expansion. The Laguerre scale determines the rate of the exponential asymptotic decline of LBFs. The fIRF with a small lifetime prefers a small , whereas the one with a larger lifetime prefers a larger . When the field of view contains fluorescence decays with a wide range of lifetimes, a strategy to find the optimized should be in place. To facilitate the discussions, we rewrite the LSD-LE equations here. Substituting Eq. (4) into Eq. (2), we can obtain The deconvolution is to estimate c l , and Eq. (6) can be rewritten as . =+ yV c (7) With linear optimization, we have Equation (8) is called OLSD-LE. The main problem of OLSD-LE is that it causes overfitting easily when a higher L is applied [9]. To avoid this a smaller L was often suggested, however a smaller L is not able to resolve small lifetimes and therefore lowers the lifetime dynamic range. Liu et al. introduced a constrained LSD-LE called CLSD-LE to overcome this overfitting problem, and a higher L can then be applied [9]. Here only the conclusion is given, where ˆ is the solution to non-negative least square problem The deconvolutions and the computations of f D and 1, 2 τ are all curve-fitting problems. As Eq.
(5), to assess their performances, the NSSE of (k), D (k) and E (k) are defined as Figure 1 shows the flow diagram summarizing how the performances of OLSD-LE and CLSD-LE on bi-exponential decays (0 < f D < 1, 0.1ns ≤ 1 ≤ 0.9ns and 2ns ≤ 2 ≤ 3ns) were assessed in four steps, highlighted in different colours. The flow diagram can also be applicable to realistic FLIM data by replacing the synthesized data with the measured data.
Symthesized FLIM data analysis
Firstly, LBFs were chosen to ensure that a given NSSE h , Eq. To reduce instability (ill-conditioned V T V deteriorates stability), computation and overfitting (for OLSD-LE), the smallest L within the shadowed area is suggested. Therefore, Fig. 2(b) shows that the optimal LBFs for OLSD-LE has L = 12 and = 0.924, whereas Fig. 3(b) suggests L = 16 and = 0.912 for CLSD-LE. Obviously CLSD-LE needs a larger L and is slightly more complicated computationally. In general, LSD-LE methods need to specify L and properly for robust analysis. Usually the lifetime range in the field of view can be known before experiments; this information can be used to obtain new residual plots similar to Figs. 2 and 3. For a given residual requirement, L is suggested to be as small as possible to ensure a faster analysis speed. On the other hand, to improve the resolvability for the smaller lifetime 1 , it is required that L cannot be too small. The selection of L is a trade-off between the speed and the lifetime resolvability, whereas determines the accuracy of the fitting. For these reasons L = 16 and = 0.912 and L = 12 and = 0.924 are chosen for CLSD-LE and OLSD-LE, respectively.
Secondly, the synthesized decays, y(k), were generated according to the paramters listed in Table 1. There are nine different h(k), with (f D , 1 ) = (0.8, 0.2ns), (0.8, 0.5ns), (0.8, 0.8ns), …, and (0.2, 0.8ns) respectively, generated from Eq. (3), and nine corresponding y(k) (k = 1,2,…,9) were generated from Eq. (2). Thirdly, CLSD-LE (L = 16, = 0.912), OLSD-LE (L = 12, = 0.924 and L = 16, = 0.912) were applied to y(k) to obtain the recovered fIRF, D (k), and NSSE y and NSSE hD were used to assess the performances as shown in Fig. 4 where axis x corresponds to k in y(k). Being different from the previous analysis in Fig. 3 where Poisson noise was not included. This analysis shows how a larger L is more likely to cause overfitting after Poisson noise sources are included. In this analysis, 500 Monte Carlo simulations were performed for each y(k). Because of overfitting, µ NSSE,y for OLSD-LE (L = 16) is larger than µ NSSE,y for OLSD-LE (L = 12). Although µ NSSE,y for OLSD-LE (L = 12) is almost equal to that for CLSD-LE, µ NSSE,hD of OLSD-LE is in general larger than that of CLSD-LE, showing that CLSD-LE performs better and produces a D (k) closer to h(k). Finally, LSE is applied to D (k) to obtain f D and 1, 2 τ . Figure 5 shows NSSE hE for CLSD-LE and OLSD-LE. Again, µ NSSE,hE for CLSD-LE is smaller than that for OLSD-LE. It shows that all E (k) obtained by CLSD-LE are closer to h(k), giving much better estimations. Figure 6 shows the performances of the estimated f D and 1, 2 τ . All µ x (x = f D or 1, 2 τ ) obtained by OLSD-LE and CLSD-LE are close to x r and all Variance are bigger than the corresponding Bias 2 , suggesting that both methods are effective. And x and Variance for CLSD-LE are smaller than those for OLSD-LE, indicating that CLSD-LE is more robust against the noise. Figure 6 shows the performances of f D and 1, 2 τ : (a) and (b) for f D , (c) and (d) for 1 , and (e) and (f) for 2 . Figures 6(a), 6(c), and 6(e) show that with a fixed f D a larger 1 gives less precise estimations, and with a fixed 1 a reduced f D gives less precise 1 but more precise 2 . These trends are reasonable. Figures 6(b), 6(d), and 6(f) show that although OLSD-LE produces less biased results and each case is Variance-limited, CLSD-LE produces smaller variances and therefore performs better in all cases. x (x = f D , 1 , or 2 ), N C is the photon count; it is used to characterize the photon efficiency of an algorithm [37]) and the bias ( x/x) using Liu's CLSD-LE and our CLSD-LE. Figures 7(b), 7(d) and 7(f) shows that our CLD-LE has comparable or better F-value performances than Liu's CLSD-LE for 2 = 2.5 (T/ 2 = 4). However, Figs. 7(a), 7(c) and 7(e) shows our CLSD-LE has superior bias performances. Liu's CLSD-LE needs to meet the requirement that the LBFs should be orthonormal and therefore the largest they can use is 0.877 when L = 16 (in order to compare with our method). A lower usually contributes a larger bias. To demonstrate how the ratio T/ 2 affects Liu's CLSD-LE, we reduced T/ 2 to 3.3 by setting 2 = 3ns. Figure 7 shows that Liu's CLSD-LE has worse bias performances in all parameters, whereas the proposed CLSD-LE has similar bias performances as the previous example (T/ 2 = 4). The F-value of Liu's method seems smaller for 1 and 2 , but this should not be misled to conclude that its photon efficiency is better [38]. Instead, it is due to the seriously biased estimations [38]. Compared with Liu's approach, the proposed CLSD-LE performs more consistently.
Real FLIM data analysis
The proposed method was also tested on two-photon FLIM images of Cy5-ssDNA-GNRs labelled Hek293 cells. The images are for evaluating the endocytosis of gold nanorods (GNR) in living cells. The detailed synthesis of GNR-based RNA nanoprobes can be found elsewhere [39]. In brief, GNRs were functionalized with thiolated oligonucleotides (ssDNA) labeled with Cy5 through ligand exchange and salting aging process. After the incubation with Cy5-ssDNA-GNRs, Hek293 cells were washed and fixed with paraformaldehyde. Two-photon FLIM experiments were performed on an LSM 510 confocal microscope (Carl Zeiss) using the SPC-830 TCSPC acquisition system (Becker & Hickl GmbH). A Ti:sapphire laser (Chameleon, Coherent) was used (at 800 nm) to generate laser pulses with a duration less than 200 fs. The timing resolution of the TCSPC is 0.039ns, and measured histograms with 256 time bins (T = 256 × 0.039 = 10ns) were recorded. in the figure shows 1 histograms within (0, 0.5ns), and it explains why a larger L is required for resolving the lifetimes of GNRs ( 1 < 100ps). The discrepancy in 2 histograms between the proposed and the Liu's CLSD-LE is due to the fact that a large number of pixels show no energy transfer and contain a larger 2 around 3ns. For Liu's CLSD-LE, the lower T/ 2 (~3) limits its resolvability for 2 (unable to resolve 2 > 3ns) causing misinterpretation that there is energy transfer at these pixels. This observation is in good agreement with Fig. 7(e), T/ 2 = 3.3. In Fig. 8(d), there is a population of pixels showing 1 < 100ps, indicating that there is energy transfer between GNRs and Cy5. For Liu's CLSD-LE, however, a smaller L (L = 8) results in biased estimations of 1 , not able to allocate GNRs (green curve). The maximum can be applied is 0.877 (for L = 16) for Liu's CLSD-LE to meet the orthonormality requirement (note that it is only quasi-orthonormal). This lower leads to a bigger bias in 2 . For our methods, although there is a slight discrepancy between CLSD-LE and OLSD-LE, they are still be able to provide similar contrast. Compared with our previous report [37], the results show that considering the iIRF in the analysis would improve locating GNRs. The results also show that the proposed CLSD-LE and OLSD-LE produce similar results, and both work robustly even when the ratio T/ 2 is less than 4. Unlike previously reported LSD-LE [22-29] and BCMM [37] requiring a much larger T/ 2 or extra bias correction procedures (for BCMM), the proposed method can reduce the acquisition time per measurement. Figure 8(f) also shows that our CLSD-LE and OLSD-LE produce similar f D histograms, whereas for Liu's CLSD-LE a smaller T/ 2 causes biased f D estimations, see Fig. 7(a). The analysis results show that the proposed OLSD-LE and CLSD-LE are effective and have potential to be used to analyze FLIM-FRET data, with the latter showing better performances.
Conclusion
We presented new criteria to choose LBFs for LSD-LE based only on how close the Laguerre expansion can approximate the fIRF. Different from the conclusion suggested by previous studies, the proposed criteria do not need to consider the mutual orthonormalities between LBFs. The new criteria do not require that the LBFs and the corresponding derivatives to be close to zero at the end of the measurement window, and they allow using a smaller T/ 2 ratio and therefore reducing the acquisition time per measurement. We applied this upgraded method to analyzing bi-exponential decays and its performances (on both CLSD-LE and OLSD-LE) were accessed and compared against the original CLSD-LE. The results show that both the upgraded CLSD-LE and OLSD-LE can be applicable to our studies, but the former performs slightly better. Both synthesized and realistic experimental FLIM data show that the proposed CLSD-LE has better performance than the original CLSD-LE when T/ 2 is small and suggest that the proposed CLSD-LE can be an effective tool to analyze bi-exponential FLIM-FRET data. It can be further extended to study multi-exponential decays in the future. The proposed methods should be able to encourage wider applications of fast FLIM technologies and gold nanoparticles for cancer therapy [37,[39][40][41]. | 3,901.8 | 2016-06-14T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Extraordinary Transport Characteristics and Multivalue Logic Functions in a Silicon-Based Negative-Differential Transconductance Device
High-performance negative-differential transconductance (NDT) devices are fabricated in the form of a gated p+-i-n+ Si ultra-thin body transistor. The devices clearly display a Λ-shape transfer characteristic (i.e., Λ-NDT peak) at room temperature, and the NDT behavior is fully based on the gate-modulation of the electrostatic junction characteristics along source-channel-drain. The largest peak-to-valley current ratio of the Λ-NDT peak is greater than 104, the smallest full-width at half-maximum is smaller than 170 mV, and the best swing-slope at the Λ-NDT peak region is ~70 mV/dec. The position and the current level of the Λ-NDT peaks are systematically-controllable when modulating the junction characteristics by controlling only bias voltages at gate and/or drain. These unique features allow us to demonstrate the multivalue logic functions such as a tri-value logic and a quattro-value logic. The results suggest that the present type of the Si Λ-NDT device could be prospective for next-generation arithmetic circuits.
For the last two decades, several types of novel-functional electronic devices have been proposed and demonstrated on a variety of device architectures so as to huddle up the limitation of conventional complementary metal-oxide-semiconductor (CMOS) devices [1][2][3][4] . For example, one of the most promising scheme is the negative-differential transconductance (NDT) and the negative-differential resistance (NDR) devices, in which quantum mechanical characteristics (e.g., resonant tunneling [5][6][7][8][9] , single-electron tunneling [10][11][12][13][14][15][16][17][18] , band-to-band tunneling [19][20][21] etc.) and/or ambipolar carrier actions [22][23][24][25] are implemented. In the operation point of view, the NDT/NDR devices exhibit the extraordinary transfer-and/or output-characteristics. Namely, the devices show a current or a voltage oscillation peak at the specific bias point. This enables us to demonstrate some of astonishing functionalities beyond the binary logic system. For instance, multiple logic functions [26][27][28] , multivalued logics [29][30][31] , and stochastic data processes 32 are prominent representatives that can put a step closer to the future electronic computing system. Furthermore, since the usage of the NDT/NDR device allows a high-speed operation of the electronic circuit system (e.g., high-frequency oscillators [33][34][35] , high-speed multiplexers 36,37 , and fast logic switches 38,39 etc.), exploiting the high-performance NDT/NDR devices could be of major importance in the next-generation ultra-large-scale integration technology. To realize highly-functional NDT/NDR devices, many of emerging materials (e.g., carbon nanotube 40 , graphene [8][9][10][11] , molybdenum disulfide 7, 12 , single molecule 41 etc.) and semiconductor nanostructures have been employed in such a prospective concept of the device scheme. Regardless of the extensive efforts made to replace Si, however, technical and scientific knowledge accumulated on Si still can offer an advantage for rapid innovations 42,43 . These backgrounds prompt a systematic study on highly-functional Si NDT/NDR devices that are not only compatible to CMOS technology but also reliable for high reproducibility.
In light of this, we have fabricated and characterized the Si NDT transistors that can be utilized for next-generation multivalue arithmetic circuits. In this article, we report data on the extraordinary characteristics of the high performance Si NDT transistors, which were fabricated using a CMOS-compatible device fabrication
Experimental Details
The NDT devices were fabricated in the form of the gated Si p + -i-n + ultra-thin body (UTB) metal-oxidesemiconductor field-effect transistor (MOSFET) on a silicon-on-insulator (SOI) substrate (t BOX ≈ 300 nm) (left-hand-side panel of Fig. 1(a)). To construct such a device structure, we used the undoped (100) Si layer (n hole ~ 5 × 10 15 cm −3 ) of the SOI substrate as a starting material. For convenience, we refer the undoped Si layer as i-Si. For the formation of the UTB channel, first, the i-Si layer was thinned down to ~20 nm by successive thermal oxidation and chemical deoxidation. Next, the channel areas (W: 0.3-2.0 μm, L: 2.8 μm) were patterned by using conventional lithography techniques (see the right-hand-side panel of Fig. 1(a)). For further thinning of the SOI thickness (<10 nm), thereafter, we carried out local oxidation of silicon at the channel regions. During this step, ~5-nm-thick gate oxide was created; hence, the thickness of the UTB channel became less than 5 nm while that of the source/drain remained thick enough to minimize parasitic resistances. To prevent the gate leakage, we subsequently deposited an additional silicon dioxide layer (t ox ≈ 20 nm) through the low-pressure chemical vapor deposition method. Then, the p + -type drain (p ~ 10 20 cm −3 ) and the n + -type source (n ~ 10 20 cm −3 ) were formed by ion implantation of BF 2 + and P + , respectively. Finally, the n-type polycrystalline Si gate and the Al electrode were constructed via conventional MOSFET fabrication processes. The electrical properties of the Si p + -i-n + UTB-channel MOSFETs were measured at room temperature by using a Keysight B1500A device parameter analyzer and an Agilent DSO-6104A oscilloscope system. Figure 1(b) shows the drain current vs. gate voltage (I D -V G ) characteristic curves at room temperature of the fabricated Si p + -i-n + UTB-channel MOSFET. Under the drain-source voltage (V DS ) of 0.3 V, the device clearly exhibits an N-shape transfer characteristic (i.e., NDT effect) with a Λ-shape peak at V G = 0-|−0.5| V. For convenience, we refer this peak as a Λ-NDT peak in the present study. The full-width at half-maximum (FWHM) of the Λ-NDT peak is less than 170 mV, and the peak-to-valley current ratio (PVCR) is greater than 10 4 . Such a sharp and prominent Λ-NDT feature can be of great benefit for the high-speed analog circuits [33][34][35] and the novel functional digital circuits [26][27][28][29][30][31][32] . As a primary task, thus, understanding the physical mechanism of the clear NDT effect is essential for more feasibility and reproducibility.
Results and Discussion
We, therefore, firstly explain the transport mechanism of the device to help understand operation schemes of our NDT transistor. Figure 1(c-h) illustrate the carrier transport behaviors of the gated p + -i-n + Si-UTB transistor under various bias conditions. At thermal equilibrium, a large built-in potential would be formed at the junction between the drain and the channel because the lightly-doped pchannel becomes an n-type due to the band-bending effect from the work-function difference between the n-type polycrystalline-Si gate (Φ gate ~ 4.0 eV) and E F labeled in each band diagram denote the conduction band minimum, the valence band maximum, and the Fermi level, respectively. and the p − -channel (Φ ch ~ 4.94 eV for p − Si). (Fig. 1(c)). In addition, a small hump would be formed at the junction between the channel and the source because of the difference in electron concentrations at the n-channel and the n + -source. The potential barrier at each side will be slightly lowered when the forward bias voltage is applied to the drain-source (i.e., V DS1 > 0) ( Fig. 1(d)). At this bias point, despite no gate bias (i.e., V G1 = 0), a small current can flow through the channel because of carrier recombination and weak diffusion at both p + -n and n-n + junctions, respectively (e.g., Point D in Fig. 1(a)).
Here, one can easily create the NDT feature by changing the magnitude of |V G | because the gate driving force is very strong in the UTB-based MOS stack (i.e., explicit control of the accumulation-depletion-inversion modes by |V G | in the UTB-channel MOSFETs) 44,45 . For example, when applying a negative gate voltage (i.e., V G2 < 0), I D will start to increase because −|V G2 | reduces the electron concentration at the channel and eventually gives rise to the increase in diffusion/drift currents through the source-channel-drain ( Fig. 1(e)) (e.g., Point E in Fig. 1(a)). When the magnitude of |−V G | further increased (i.e., V G3 ≪ 0), however, I D will drastically decrease because −|V G3 | accumulates plenty of holes in the channel. Namely, |V G3 | will increase the potential barrier height at the channel-source (i.e., p-n + ) junction as large as the diffusion/drift action could be inhibited ( Fig. 1(f)). As a result, the reverse saturation would occur at the channel-source junction; hence, I D will rapidly decrease at V G3 (e.g., Point F in Fig. 1(a)). Such a sudden drop of I D causes a Λ-NDT phenomenon in the present type of the NDT transistor.
Through keeping on increasing −|V G | (i.e., V G4 ⋘ 0), the electrons can transfer from the source to the channel via band-to-band tunneling (BTBT). In other words, the BTBT event will occur under −|V G4 | because large −|V G | populates the channel with abundant hole carriers as much as the depletion width becomes thin enough to allow BTBT at the p + -n + junction ( Fig. 1(g)). At this bias stage, I D will significantly increase due to both the hole drift at p + -p + and the electron tunneling event at p + -n + (e.g., Point G in Fig. 1(a)). All of the above allow the gated Si p + -i(p − )-n + UTB-channel MOSFET to exhibit the N-shape transfer characteristic in the negative V G region. At the positive V G region (i.e., V G5 > 0), the value of I D would remain low (e.g., Point H in Fig. 1(a)) because +V G -induced electrons in the channel increases potential barriers at the drain-channel (i.e., p + -n) junction ( Fig. 1(h)).
Here, we point out the statistical uncertainty of BTBT at the higher |V G | region (e.g., at |V G | ≫ |−2| V in Fig. 1(a)). To perform BTBT, in fact, four necessary and sufficient conditions must be simultaneously satisfied: (i) the occupied energy states should exist in the reservoir to supply charge carriers, (ii) the unoccupied states also should exist in the charge collection region, (iii) the tunnel barrier width should be thin enough to ensure a finite tunneling probability, (iv), the momentum must be conserved during tunneling events. When fabricating the integrated circuit, however, the BTBT probability in semiconductor junction devices would be different from each other because the above conditions are very sensitive to both the energy perturbation and the thermal fluctuation. As a result, the tunneling current would be nonidentical for every device, leading to a vague output in the integrated circuit.
On the other hand, the NDT effect at the Λ-shaped peak region (e.g., at V G = 0-|−0.5| V in Fig. 1(a)) is reliable and reproducible for every device because the behavior occurs on the basis of only gate-controlled ambipolar carrier actions at the junction areas (i.e., gate control of 'recombination → diffusion/drift → reverse saturation'), as discussed earlier. Furthermore, fast switching of the positive-to-negative differential transconductance at the Λ-shaped peak region is beneficial for future high-speed and functional circuit applications. Therefore, from now on, we emphasize the features of the Λ-NDT peaks, which can be effectively demonstrated and modulated by junction dynamics in the device.
From the Si p + -i-n + gated-transistors fabricated through the aforementioned procedures, more than 65% of the devices showed clear NDT characteristics at room temperature. As shown in Fig. 2(a-f), the devices clearly exhibit the Λ-NDT peak in their transfer characteristic curves. Regardless of the channel size (i.e., W/L), the Λ-NDT peak clearly appears at V G = |0-−0.5| V, while the peak current increases with increasing channel width. This verifies our NDT transistors to hold promise for future CMOS-compatible novel functional circuit applications. The magnitude of PVCR is no less than 10 4 for all devices, and the value of FWHM is~175 mV on average.
Since the junction-depletion characteristics depend on both the Fermi potential inside the channel and the built-in potential at the channel edge, one may expect that the Λ-NDT conditions (i.e., recombination → diffusion/drift → reverse saturation) can be modulated by controlling either of V G or V DS. We, accordingly, measured the I D -V G characteristics at various V DS to investigate the effect of bias conditions on the modulation of Λ-NDT peaks (Fig. 3). As the magnitude of +V DS increases, the peak current at the Λ-NDT region is exponentially increased because the large +V DS enhances the drift action at the source-channel-drain junction. In addition, the Λ-NDT peak position systematically shifts toward the lower |−V G | region with increasing +V DS (see also the inset of Fig. 3).
The precise control of NDT peaks in our Si p + -i-n + UTB MOSFET is quite similar to that in highly-functional single electron/hole transistors that were devised with ultra-small quantum dots (e.g., d dot < 5 nm) [13][14][15][16][17][18] . In this otherwise quantum nature-free NDT device (e.g., no quantum dot etc.), however, we explicitly demonstrated the systematic modulation of the Λ-NDT peak through only controlling the electrostatic junction characteristics. Namely, the position and the magnitude of the Λ-NDT peak can be precisely controlled through modifying the potential profile for the NDT condition 25 . For instance, when a lower |+V DS | is applied to the device, a larger |−V G | is necessary to accumulate plenty of holes in the channel for performing the Λ-NDT phenomenon (i.e., switching of 'diffusion/drift → reverse saturation' by |−V G |), and vice versa at a higher |+V DS |.
When using the NDT device for the electronic circuits, the values of PVCR and FWHM play key factors because those are closely related to both the on/off ratio and the switching speed of the device. Thus, we assess the dependences of PVCR and FWHM on the bias conditions. As can be seen from Fig. 4(a), the bias voltage of V DS strongly affects the value of PVCR. With increasing V DS up to 0.3 V, the magnitude of PVCR increases and reaches ~2 × 10 4 , whereas that monotonically decreases when V DS exceeds 0.35 V. This can be explained by the variation of the off-current upon varying V DS . When V DS is low (e.g., V DS ≪ 0.3 V), the built-in potential at the channel-source junction (V bi(c-s) ) is still high enough to cut off the carrier transport thorough the channel (i.e., off-current = very low) (Fig. 4(b)). In this case, since the on-current increases with increasing V DS (e.g., up to 0.3 V), the magnitude of PVCR becomes higher. When V DS is high (e.g., V DS > 0.3 V), however, the barrier height of V bi(c-s) is decreased as low as a few of electrons can flow from the source to the channel (i.e., off-current ≠ low) (Fig. 4(c)). In this case, the value of off-current becomes higher and higher with increasing V DS ; hence, the magnitude of PVCR decreases in spite of the increase in on-current at higher V DS . Different from the behavior of PVCR, the effect of V DS is insignificant on the magnitude of FWHM (Fig. 4(a)) because the capacitive coupling strength of the UTB gate stack is much stronger than that of the drain-channel-source junction.
Another important factor of the NDT device is its V G -tunable swing-slope (SS) at the NDT peak region because SS is a key parameter of the device performance to produce a high speed on/off operation upon the input signals. The dependences of SS values on V DS are shown in Fig. 5. The swing slopes at both the positive-and the negative-differential conductance regions (i.e., SS Nega and SS Posi ) show a similar behavior because those are mostly influenced by strong gate-tuning of V bi(c-s) (i.e., fast switching of on/off operations by V G in the UTB gate stack) (see the inset of Fig. 5). The best value of SS is ~70 mV/dec at V DS < 0.35 V, which is comparable to that in the state-of-the-art Si MOSFETs 46-50 . When V DS exceeds 0.4 V, however, the value of SS begins to increase because of the increased off-current at higher V DS , as discussed above. Figure 6 shows the I D -V D characteristic curves of the device at various V G near the Λ-NDT peak region. At V G = 0 V, the device exhibits a typical diode-like feature because the p + -i-n + junction is formed along the drain-channel-source region. As the magnitude of |−V G | increases up to |−0.4 V|, the turn-on voltage decreases and the on-state current increases because -V G would induce hole accumulation in the channel and could reduce total V bi along drain-channel-source (i.e., p + -p-n + ). When |−V G | is further increased (>|−0.5| V), however, the turn-on voltage rapidly increases because the large magnitude of |−V G | would accumulate more holes inside the channel area; hence, total V bi would increase particularly at the junction between channel and source (i.e., p ++ -n + ). In addition, the device displays the current staircases (CSs) at V G = −0.5-−0.7 V (see the inset of Fig. 6) due to the suppression of carrier conduction at the NDT region. As |V G | increases, the range of CS becomes wide, and the current level of the plateau goes down. Namely, the knee position of CS shifts stepwise toward the lower V D and the lower I D .
The stepwise-shifts of both NDT peaks and CS plateaus are useful for the circuit application of the NDT device because it can provide multiple operation points for logical functions at a wide range of voltages 28 . Such remarkable tunabilities of NDT and CS can be traced at a glance by measuring the charge diagram of the device. As can be seen from the contour plot of I D as functions of both V G and V DS (Fig. 7), both the Λ-NDT and the CS characteristics are systematically modulated by V G and V D . For example, at the fixed V DS (e.g., V DSx ), the color of I D is changed along −|V G | direction (i.e., white → gray → black → gray → white). This corresponds to the −|V G |− dependent change in the current level of I D , indicating the appearance of the Λ-shaped I D peak (i.e., Λ-NDT). As the magnitude of V DS increases, the Λ-NDT region is extended toward the A direction. The extended Λ-NDT region is fairly long and inversely cuspidal, where the stepwise shifts of the Λ-NDT peaks and the CS plateaus occur, as confirmed in Figs 3 and 6. Thanks to the appearance of the extended Λ-NDT peak region, one can choose many of the operation points from a unit device for the demonstration of multivalue logic functions. For example, when using our NDT device as a one-transistor logic gate, two input-bias voltages (i.e., V IN1 = V G and V IN2 = V D ) can be selected at specific bias points for demonstrating different multivalue logic functions (see also Fig. 8(a)). Following this way, as depicted in Fig. 7, a tri-value and a quattro-value logic functions can be chosen as possible examples of one-transistor multivalue logics. Figure 8(b) and (c) display the measured transient waveforms of the tri-value and the quattro-value logics, respectively. The voltage output (V OUT ) clearly reveals a sequential count function of the multivalue upon varying V G (=V pulse1 ) and V D (=V pulse2 ). Although the output-voltage level is quite low because of the low current level at the NDT region, we believe that the multivalue logic functions can be effectively used for future highly-sensitive low-power arithmetic circuits. Finally, we briefly state the speed limit of the NDT-based multivalue logic circuits. In our device, the channel conductance near the Λ-NDT peak is in the order of 10 s nS, which corresponds to the junction resistance (R j ) of a few of hundreds MΩ. In addition, the junction capacitance (C j ≈ 1/2·q/kT·τI DQ 51 , where k is the Boltzmann constant, T is the environmental temperature, I DQ is the driving current at an operation point, and τ is the carrier lifetime) is determined to be ~2 fF, when assuming τ = 10 −7 s 51 . Furthermore, since the gate capacitance (C g = W•L•k ox• ε 0 /t ox 51 , where k ox is the relative dielectric constant of SiO 2 , ε 0 is the vacuum permittivity, and t ox is the thickness of SiO 2 ) is ~1.2 fF for the present device (W = 0.3 μm, L = 2.8 μm, t ox = 25 nm), the time constant (≈R j C j ) of our Λ-NDT device can be estimated to be less than 0.1 μs. We can, therefore, deduce the intrinsic speed of the device to be no less than several tens of MHz. Although the intrinsic speed limit seems little low, the implementation of high-mobility device architectures (e.g., nanowire-or nanosheet-channel MOSFETs with a gate-all-around stack [52][53][54][55] to the present type of the NDT device can be the next step to improve the speed of the NDT-based one-transistor multivalue logic circuits.
Conclusion
The NDT devices were fabricated in the form of the Si p + -i-n + UTB-channel MOSFETs. The devices clearly showed a Λ-shape NDT peak, at room temperature, with the extremely large PVCR (>10 4 ) and the small FWHM (<170 mV). These features were universal for multiple devices that had been fabricated using an identical method (i.e., yield ~65%). The best value of SS at the Λ-shape NDT peak region was ~70 mV/dec. In addition, the Λ-NDT peaks were confirmed to be effectively modulated through the control of the junction characteristics by changing only V G and/or V DS . Owing to the systematic modulation of the Λ-NDT peaks, we successfully demonstrated the multivalue logic functions (e.g., tri-value and quattro-value logics) on a single device as a one-transistor multivalue logic gate. These may offer potential applications for low power/high speed multivalue logics beyond the ordinary binary logic system. | 5,110 | 2017-09-11T00:00:00.000 | [
"Materials Science"
] |
Rebound effects may jeopardize the resource savings of circular consumption: evidence from household material footprints
The circular economy model aims to reduce the consumption of virgin materials by increasing the time materials remain in use while transitioning economic activities to sectors with lower material intensities. Circular economy concepts have largely been focussed on the role of businesses and institutions, yet consumer changes can have a large impact. In a more circular economy consumers often become users—they purchase access to goods and services rather than physical products. Other consumer engagement includes purchasing renewable energy, recycling and using repair and maintenance services etc. However, there are few studies on whether consumers actually make these sorts of consumption choices at large scale, and what impacts arise from these choices on life-cycle material consumption. Here we examine what types of households exhibit circular consumption habits, and whether such habits are reflected in their material footprints. We link the Eurostat Household Budget Survey 2010 with a global input-output model and assess the material footprints of 189 800 households across 24 European countries, making the results highly generalizable in the European context. Our results reveal that different types of households (young, seniors, families etc) adopt different circular features in their consumption behaviour. Furthermore, we show that due to rebound effects, the circular consumption habits investigated have a weak connection to total material footprint. Our findings highlight the limitations of circular consumption in today’s economic systems, and the need for stronger policy incentives, such as shifting taxation from renewable resources and labour to non-renewable resources.
Introduction
Global material consumption has continued to increase in recent decades, with growth accelerating faster during the 2000s (Schandl et al 2017). Given deep concerns surrounding unsustainable resource use, the circular economy has been suggested as an alternative to the traditional linear model of production, consumption and disposal. Circular economy approaches aim to decrease the virgin material inputs and the waste material outputs by slowing, closing and narrowing both material and energy loops, while maintaining economic growth (Ellen MacArthur Foundation 2013, Geissdoerfer et al 2017. The circular economy has a strong emphasis on the role of private sector and new business models (Geissdoerfer et al 2017, Camacho-Otero et al 2018, Manninen et al 2018. However, individual consumers can support circularity through their consumption choices.
The role of the consumer in the circular economy has been discussed from several perspectives. The dominant perspective is to shift the role of the consumer towards that of a user (Ellen MacArthur Foundation 2013, Tukker 2015, Ghisellini et al 2016. Instead of ownership, circular economy approaches highlight 'collaborative consumption' (Belk 2014), 'product-service systems' (Mont 2002, Tukker 2015 and 'access-based consumption' (Bardhi and Eckhardt 2012). In all these models, consumers have access to the needed goods and services, but do not own them. Online and mobile platforms have increased the possibilities of collaborative consumption (Belk 2014, Perren andGrauerholz 2015), but traditional rental and leasing services can also contribute (Ellen MacArthur Foundation 2013, Tukker 2015. In addition to collaborative consumption, consumers can promote a circular economy by choosing products that are designed for longevity and recyclability, using maintenance and repair services, sorting and recycling their waste, replacing fossil fuel -based energy sources with renewables, and much more. However, there are few large-scale studies on whether consumers make circular consumption choices in practice, and whether these habits depend on socioeconomic characteristics or the level of urbanisation. Urbanisation has been suggested to increase the potential of sharing- (Fremstad et al 2018) and circular economies (Su et al 2013, Ghisellini et al 2016 due to the spatial proximity of businesses and people in cities. Previous empirical studies on circular consumption behaviour have focused on the barriers and motivators of consumer action (Camacho-Otero et al 2018). Yet, the review of Camacho-Otero et al reveals studies lack a direct connection to the actual environmental impacts of consumption. Particularly absent are holistic indicators that assess overall environmental impacts including rebound effects. An important holistic indicator is the environmental footprint (Steinmann et al 2017, Wiedmann andLenzen 2018). An environmental footprint captures the life-cycle environmental impacts caused by the production of goods and services and allocates these impacts to the end-consumer. Steinmann et al (2017) highlight that even relatively simple resource footprints (e.g. water, energy, material) can be highly representative of environmental damage.
An intrinsic benefit of footprint methods is that they include rebound effects (Ottelin 2016). Rebounds originate when environmental actions cause monetary savings or require investments, which leads to changes in other types of consumption. Depending on their direction and strength, rebound effects can either increase or decrease the level of environmental impacts on net (Font van der Voet 2014, Ottelin 2016). Rebound effects in circular economy have been theorized (Zink andGeyer 2017, Figge andThorpe 2019), and shown in practice for individual products (Makov and Font Vivanco 2018). However, there are no previous studies concentrating on household level rebound effects related to circular consumption.
While the concept of the circular economy does cover energy and greenhouse gas emissions, its focus is on material cycles (Haas et al 2015, Geissdoerfer et al 2017. For this reason, we use the consumer material footprint here. Several studies have examined consumer material footprints (e.g. Lettenmeier et al 2014, López et al 2017 but they are not as widely studied as consumer carbon footprints. Different types of indicators have been used under the term 'material footprint' . These include the 'material input per unit of service' (MIPS) -method (Lettenmeier et al 2014, Laakso and Lettenmeier 2016, Buhl et al 2019, and environmentally extended input-output (EE IO) analysis (López et al 2017, Pothen and Reaños 2018, Jiang et al 2019. MIPS is based on process life cycle assessment and includes unused raw material extraction (RME) (e.g. waste rock in mining and logging residuals). EE IO analysis is another life cycle method that covers upstreams more comprehensively but is less accurate at individual product level (Piñero et al 2018). EE IO studies sometimes include unused RME but not uniformly. Including the unused RME can increase material footprints significantly . However, it can be misleading, because the amount of the unused RME does not necessarily correlate well with the environmental damage caused (Wiedmann et al 2015, SI), making comparisons between countries or different groups of consumers less meaningful. In this study, we follow Giljum et al (2014), Wiedmann et al (2015) and Ivanova et al (2016), and define material footprint as consumption based RME, including only materials taken into the direct use of the economy. In addition, we focus on household consumption alone, and exclude public consumption and investments.
Previous studies on consumer material footprints have focused on the relationship between various socioeconomic factors and the footprints (Lettenmeier et al 2014, López et al 2017, Pothen and Reaños 2018, Buhl et al 2019. Junnila et al (2018) is perhaps the only consumer material footprint study framed specifically with circular economy. They test the impact of reduced ownership on material-and carbon footprints of Finnish consumers. However, sustainable consumption more generally has been discussed and examined by many consumer material footprint studies. For example, Buhl et al (2019) examine the impact of environmental attitudes on German material footprints. Laakso and Lettenmeier (2016) provide an interesting experimental study including five Finnish households. They study how the material footprints of these households are reduced through various efforts, such as vegetarian diets and reduced driving. Yet, there is a lack of largescale studies investigating the impacts of circularity on material footprints.
In this study, we aim to fill these gaps by examining what types of households exhibit circular consumption behaviour, and how this is reflected in their material footprints. In other words, we combine the analysis of circular consumption patterns with the material footprint analysis, thus providing new insights that either analysis alone could not deliver. Furthermore, we analyse the connection between selected circular consumption indicators and material footprints, and examine what sorts of rebound effects may occur. The study is based on Eurostat's Household Budget Survey (HBS) 2010 and covers 189 800 households in 24 European countries. We combine the HBS with the global multiregional input-output (MRIO) model Exiobase 2015.
We aim to answer the following questions: (1) What household types exhibit (a) circular-and (b) linear consumption behaviour? (2) Is circular consumption associated with lower material footprints? and (3) Are there significant rebound effects related to the found circular consumption habits?
Research design
The research questions were addressed with three different analyses (figure 1). First, we examined the relationship between socioeconomic variables and circular-and linear consumption behaviour. To do this we defined circular-and linear consumption indicators based on circular economy literature and the Eurostat HBS in 2010. In particular, we are interested in how life stage (young, families with children, seniors etc) is related to consumption habits. In addition, we covered education, age, gender and the degree of urbanisation in the analyses. Secondly, we created a material footprint model, and analysed whether circular consumption features of different household types are reflected in their material footprints. Thirdly, we studied the connection of selected circular consumption habits to consumer material footprints, and examined potential rebound effects. We used multivariable regression analysis as the main method of analysis in all phases.
In the following sub-sections, we first present the used research material and material footprint model. Second, we describe the process of selecting suitable indicators for circular-and linear consumption. The selection was based on circular economy literature but limited by data availability. Third, we present the regression models and variables used in the consumption behaviour analyses (based on expenditure data alone). Finally, we describe the research settings and regression models used in the material footprint analyses, covering the relationship of socioeconomic variables, the degree of urbanisation, and circular consumption indicators with material footprints.
Research material
The study is based on two datasets: Eurostat's HBS in 2010, and a global MRIO model, Exiobase 2015(Tukker et al 2014. The HBS includes detailed household expenditures, and information on household characteristics, residential location and socioeconomic status across EU member states. The main purpose of the survey is to provide general information about consumption and living conditions in the EU region. The HBSs are conducted voluntarily by member states around every five years. Since they are voluntary, member states themselves decide how to organize data collection. Thus, despite aiming to harmonise survey data between member states, there are still inconsistencies, which should be considered when using the survey data and interpreting results. The total sample size of the HBS 2010 is 275 000 households across 26 countries. However, due to data limitations, here we calculate material footprints for 189 800 households across 24 European countries. The country specific sample sizes and country abbreviations are provided in table A1 in the appendix.
Environmental MRIO models are based on national accounts. They include monetary transaction matrices between countries and economic sectors, and satellite accounts for environmental indicators. Here we select Exiobase due to its high sectoral resolution, and because of its European focus. Exiobase 2011 is publically available at: www.exiobase.eu/. However, in this study we use a more recent version, Exiobase 2015, which reflects better current production technologies. Exiobase includes 44 countries and 5 'rest of world' regions, 200 products, and numerous different environmental indicators. The aggregate indicator for 'Domestic Extraction Used' alone is divided into 227 different materials. However, for the purpose of the study, we summed these to one indicator.
Material footprint model
Material footprints can be calculated by using environmentally EE IO analysis , Wiedmann et al 2015. EE IO model is used to calculate the material intensities (kg/€) of economic sectors or specific products. The material footprint of a product can then be calculated by multiplying its price with the corresponding material intensity. In this study, the 200 different Exiobase products were matched with the COICOP classification (Classification of Individual Consumption by Purpose) as used in the HBS. The concordance matrix was constructed by following Ivanova et al (2016), with small modifications. Some Exiobase categories used by Ivanova et al have no household final demand in the 2015 Exiobase model used in this study. We replaced these with suitable categories that have (see the supplementary material (available online at stacks.iop.org/ERL/15/104044/mmedia) for the concordance matrix). We used consumption category specific inflation coefficients (Eurostat 2020a) and price statistics (Eurostat 2020b) to transform the intensities of different sectors from 2015 to 2010 euros, and from basic prices to purchaser prices, in order to match them with the HBS data. As a result, our material footprint model is based on the economic structure and technologies in 2015, but consumption behaviour in 2010, because the Eurostat HBS 2015 was not yet available when the study was conducted. There have probably been some small changes in consumption behaviour from 2010 to 2015, but this is unlikely to affect our main findings. Following Giljum et al (2014), Wiedmann et al (2015) and Ivanova et al (2016), we used the consumption-based domestic RME, excluding unused materials, as the material footprint. The materials include biomass, fossil fuels, metal ores, and non-metallic minerals. We further exclude the material footprint of public consumption and investments, because these are not possible to allocate fairly to individual households without additional data. The unit of analysis in our study is the individual consumer (per capita).
Construction materials posed an issue because while its material intensity is generally quite high there is no suitable match for it in the HBS. Unlike the HBS of some individual countries, Eurostat's HBS does not include information on housing type, living space (m 2 ), or building materials. It only includes the expenditure on rentals and imputed rentals, housing energy and housing maintenance. Due to this data limitation and since the focus of this study is to compare different households, rather than estimate the overall material footprint, we choose not to use an average material footprint of construction for all households, or any other proxy. Consumer material footprints presented here will therefore be somewhat lower compared to previous studies. Because of this limitation, we could not test the connections between housing related circular consumption habits and material footprints. However, Junnila et al (2018) provide some previous results on these.
Selecting indicators for circular-and linear consumption
We used circular economy literature to identify key circular actions that can be translated into consumer behaviour. In addition, we identified linear, 'Take-Make-Dispose' , actions (see table 1). Most importantly we rely on two previous literature reviews by Ghisellini et al (2016) and Geissdoerfer et al (2017), who reviewed 1031 and 362 studies on circular economy respectively. In addition, we put emphasis on the Ellen McArthur foundation's report 'Towards circular economy' (2013), which is highly cited in this field. Thus, these three references are specifically cited in table 1 regarding the characteristics of circular-and linear consumption.
In this study, we matched COICOP consumption categories with the identified characteristics of circular-and linear consumption (table 1) in order to create practical indicators to be used in the regression analyses. We found matching consumption categories for most of the identified characteristics, but not all. The COICOP classification, used broadly for HBSs around the world, does not provide information about the quality of the purchases. Thus, there is no information about whether the products are designed for longevity, have a green product label or are bought second-hand. There is also no information about households' waste sorting and recycling. These areas should be seen as a priority for addition in both the COICOP classification and in expenditure surveys if we are to increase our understanding of environmental consumption behaviour.
Based on table 1, we created the following indicators for circular-and linear consumption behaviour (respective COICOP categories in parenthesis). Many of these consumption categories are relatively small, and there are a lot of households for which there is no expenditure in these categories. Thus, these indicators were used as dummy (binary) variables, meaning that 1 corresponds to having expenditure in the category, and 0 corresponds to having no expenditure in the category. However, for maintenance, meat products, services and tangibles, we used a continuous variable (expenditure It should be noted that these indicators are not exhaustive and represent only a small portion of potential consumer actions. Nonetheless, they cover several aspects of circular economy. Repair, hiring, refurbishing, maintenance and rental services are most clearly circular as defined by previous literature on circular economy. Here we consider public transport as part of collaborative and access-based consumption. Since the production of vegetarian food is much more resource and environmentally efficient than the production of meat products (Tukker et al 2011, Hallström et al 2015, Scherer and Pfister 2016), we consider a vegetarian diet as circular-, and the consumption of meat products as linear consumption. Furthermore, we use lumped services as one indicator for circular consumption. Although not all services are circular in the sense that they would directly substitute the use of products, the expenditure in services reduces the overall expenditure on products (assuming constant total expenditure). However, transport services are not included in the services here. Particularly, car rentals, and the repair and maintenance of cars, are not included in the services, nor in the subcategory 'repair and hiring services' . The used division of different consumption categories is provided as supplementary information.
Regression models for circular-and linear consumption
In order to examine the socioeconomic drivers of the selected circular-and linear consumption indicators, we used a multivariable regression analysis. We created two sets of models. With the first we examined the connections of life phase and the degree of urbanisation to consumption. With the second, we analysed education and gender, and used household size and age as control variables. Since life phase is usually a combination of household size and age, we did not include it in models that included household size and age. However, we added the degree of urbanisation in both sets of models to observe whether the models yield similar results (they did, which suggests that life phase is an appropriate variable to cover both age and household size simultaneously).
The logit models (for binary consumption variables) used in the study are as follows: where P(expenditure on commodity n > 0) is the probability of having expenditure in a specific consumption category; F(z) = e z /(1 + e z ) is the cumulative logistic distribution; income is disposable income per capita; life phase, urban, household size (HHS), education, age (in 5 year classes), and country, are class variables; gender is a dummy variable (0 = male, 1 = female), betas are regression coefficients, and u is an error term. Controlling for the country controls the specific country characteristics related to different product prices, production technologies, etc, and also the differences in survey data collection (for more details, see Ottelin et al 2019).
The respective linear regression models used in the study are as follows: ln (expenditure on commodity n) We used STATA's survey settings in all regression analyses, including those on material footprints. Importantly this allows for using survey weights in the analyses since they are vital when large survey datasets are used (Ala-Mantila et al 2014, Ottelin et al 2019). These weights correct the demographic differences between the sample and the actual population. In the case of Eurostat's HBS, weights also take into account the different sample sizes of different countries, so that the actual EU averages can be analysed. The survey weights provided by the Eurostat HBS were used throughout the study. In addition, we multiplied the weights by the household size, because the unit of analysis in the study is individual consumer, not household as in the HBS.
In each analysis, we aimed for as large sample size as possible, but because of data limitations we had to exclude some countries from specific regression models. We excluded a country if its sample size for the model in question was below 50 households. In addition, we excluded countries from some models because of missing data (table A1 in the appendix). Excluded countries are noted in the results. We also calculated the variance inflation factors (VIFs) after each regression model to check for multicollinearity (VIFs above 10 are usually considered problematic). The VIFs for the variables of interest were below three in all cases. Germany and Poland had relatively high VIFs (5 to 6) in some models, but we found this acceptable given that the focus of the analysis was not on country comparisons.
In the case of waste management, there are significant differences between countries in data quality. In some countries, waste management services are part of rentals and/or other housing related payments, which may explain the lower data coverage. In order to get meaningful regression results, we divided countries into three groups based on the share of households who have expenditure in 'refuse collection' (COICOP 0442): (1) 80%-100% paid for refuse collection: CZ, DK, EL, ES, HR, CY, LV, LU, SI (2) less than 80% but more than 0% payed for refuse collection: BE, BG, EE, i.e., LT, HU, PL, PT, SK, FI, and (3) no data: DE, FR, IT, MT, SE, UK (the country abbreviations are provided in table A1 in the appendix). We studied groups 1 and 2 separately, and excluded group 3 from the waste analyses. The most relevant model for waste generation is the linear regression model for group 1, since this uses the richest data. In the case of logit models, it should be noted that there are likely to be other reasons aside from consumption habits for higher or lower likelihood of paying for waste management. For example, rentals may include waste management services.
The degree of urbanisation and the studied EU regions
The Eurostat's HBS includes a common variable for the degree of urbanisation, which was used here. It is based on local administrative boundaries. Areas are divided into cities (at least 500 inhabitants per km 2 ), towns and suburbs (100-499), and rural areas (<100). For the purpose of material footprint illustration (figure 3) we divided the studied countries into Northern Europe (DK, FI), Western Europe (BE, FR, UK, i.e., LU), Eastern Europe (BG, CZ, HU, EE, LV, LT, PL, SI, SK), and Southern Europe (ES, IT, EL, PT, HR, MT, CY). Sweden was excluded from most of the analyses, including figure 3, since it did not have the needed 'life phase' or 'education' variables. Germany was excluded from all material footprint analyses due to missing data on detailed consumption categories.
Comparison of material footprints
We conducted two separate footprint analyses. First, we compared the material footprints of different household types, and analysed whether the circular consumption habits of each household type are reflected in their footprints. Second, we examined the connection between selected circular consumption indicators and footprints. The selected indicators were the purchasing of repair and hiring services, public transport, and a vegetarian diet. To be exact, the 'vegetarian' diet used here is actually lacto-ovopesco vegetarian diet, meaning that it excludes meat, but may include fish, eggs, and dairy products. Even this loose definition of vegetarians gives a relatively small group of people: around 3% of the whole population.
We selected indicators that do not correlate heavily with income. Income is the main driver of expenditure, which is the main driver of material footprints, and thus either income or expenditure needs to be controlled for when the aim is to study the impact of other variables. Including an indicator that correlates strongly with income in a regression model that includes income would cause collinearity, making it impossible to interpret the results unambiguously.
We used expenditure as a control variable to compare households with similar levels of total expenditure. Thus, we avoid possible biases related to households who have underreported their consumption in the HBS. The downside is that the models do not capture real differences in savings rates either (Ottelin 2016).
The general regression model used in the material footprint analysis is as follows: ln (Material footprint) = β 0 + β E ln (expenditure) + β h Life phase h + β i Circular consumption indicator i + β j Country j + u where material footprint is the total material footprint per capita; expenditure is total expenditure per capita; the circular consumption indicator is a selected dummy variable; and the remaining variables are the same as defined above for the equations (1)-4. Finally, we reveal potential rebound effects by using illustrations and regression analysis. As explained by Ottelin (2016), it is important to control for other variables that can affect the environmental footprints, when the aim is to illustrate and estimate the rebound effects of specific environmental actions. Thus, in order to control for income and household type in the result figures, we used middle-income working-age (25-64 years) singles as a case group. We created country specific income groups, and the middle-income group includes the middle-income 50% of the case population. We report selected case countries that have particularly rich data regarding the tested circular consumption indicator in question. We also aimed for geographical balance. See tables a8 and a9 in the appendix for further details on the studied groups.
Relationship between socioeconomic variables and consumption habits
Most socioeconomic groups engage in both circularand linear consumption, but different groups adopt different circular features (see figure 2). No clear forerunners of circular consumption were found. Regarding household type, young (16-24 years) singles and couples show stronger circular consumption patterns than others, but they tend to consume more on tangibles and are more likely to purchase motor vehicles than older people without children. This could be because many of their goods are first-time purchases, including vehicles. At the same time, seniors (⩾65 years) consume more on repair and refurbishing services than any other household type, but they also spend more on meat products and waste management, suggesting higher waste generation. Families with children tend to consume a wide variety of products and services, but simultaneously, they get significant economies-of-scale benefits due to intrahousehold sharing, as highlighted by previous studies (Wier et al 2001, Ala-Mantila et al 2016. This is reflected by their higher likelihood of consumption in many (circular-and linear) consumption categories but lower expenditure overall.
Increasing income increases circular consumption by increasing the likelihood of consuming repair, hiring and refurbishing services, how much is spent on maintenance services, and services in general. However, the likelihood of rental living decreases with increasing income, and its connection to the level of public transport is weak. Income is also a significant driver of linear consumption, particularly motor fuels, air travel and tangibles. Surprisingly, its impact on the consumption of meat and on the likelihood of purchasing vehicles is low. Purchasing vehicles includes the purchases of second-hand vehicles here. Furthermore, increasing income increases spending on waste management services.
Increasing levels of education enhances circular consumption habits. Unlike income, it clearly increases the use of public transport. However, increasing levels of education increases driving and air travel too, which has significant environmental consequences. Gender differences are small compared to the other socioeconomic variables. Women seem to have more circular features in their consumption than men (such as using public transport, and rental and repair services), but they tend to spend slightly more on tangibles and are more likely to travel by plane.
Urbanisation is also connected to consumption habits. Previous studies find that cities may see increases in sharing due to their high concentration of households and businesses (Ala-Mantila et al 2016, Fremstad et al 2018). We find similar results to other studies that public transport and services in general are increased in urban regions, but also that urban residents are more likely to use repair and hiring services than rural residents. However, it is possible that it is more common for people to repair their own goods in rural areas and to lend items to neighbours for free. This type of behaviour would be in line with circularity and sustainability, but it is not captured by circular economy measurements, since neither activity is monetized. In the monetization of circular economy cities play the major role. However, our results reveal that cities also have downsides regarding the circular economy. Although a major concept of the circular economy is that leasing and hiring activities would decrease the need of ownership, city residents consume tangibles slightly more than suburban and rural residents, and their expenditure on waste management services is higher, despite the fact that some of the costs may be embedded in rentals.
Material footprints
The material footprints of households are mainly driven by income and household size (table 2). Families with children, and young adults (16-24 years) have the lowest material footprints per capita ( figure 3 and table 2). The lowest material footprint, 3.4 t per capita, is found among young families living in Eastern Europe (young families are those with one or more <5 year old children). Singles of workingage (25-64 years) have the highest material footprints, varying from 8.5 t per capita in Eastern Europe to 11.0 t in Southern Europe. Singles seem to have relatively higher material footprints (compared to other household types) in Eastern and Southern Europe than Northern and Western Europe. However, there are overall fewer singles in these regions, especially among under 30 year olds, and those who are single, have significantly higher income than other household types, which explains the high material footprints. In Northern and Western Europe, low income students concentrate in the group of singles, levelling the income differences.
The composition of consumer material footprints is quite similar across Europe: food plays a major role, followed by tangibles, housing energy, and private transport in most cases. Differences are larger in Eastern Europe, where housing energy causes almost half of households' material footprints due to a heavy reliance on coal energy. However, this is compensated for by lower material footprints in other sectors (due to lower income and consumption compared to other regions). In Northern Europe, rentals cause a larger material footprint than elsewhere, probably because heating energy is usually included in rental agreements. In Southern Europe, the role of private transport (including vehicle purchase, maintenance and motor fuels) seems to be particularly high. This is due to a higher sectoral material intensity rather than higher consumption compared to other European regions. Possible reasons for higher material intensity are lower prices and/or less efficient production chains.
Although material footprints are clearly much more dependent on income and household size than individual consumption choices, some interesting observations can be made, see figure 3. First, although young adults and families with children generally spend more on tangibles than other households when income is controlled (figure 2), this materially intensive consumption habit does not lead to higher material footprints overall. Similarly, although working-age singles generally spend more on services than other households, this does not lead to lower material footprints overall. When young adults and seniors are compared, the seniors' higher consumption of repair and hiring services is not well reflected in their material footprints of tangibles or services, but their higher consumption of meat products is clearly reflected in their higher material footprints of food. In addition, the high likelihood among young adults, single parents, and families to use public transport services appears to correlate with lower material footprints, particularly from private transport. The findings suggest that the impact of circular consumption habits on resource savings is not straightforward, and there may be rebound effects, as we will next examine more closely.
In terms of the connections between the studied circular consumption indicators and material footprints, the use of repair and hiring services does not imply a lower consumer material footprint (figure 4(a) and table 3). Although this is counter-intuitive, repair and hiring correlates with higher goods ownership and service use in general, which increases material footprints ( figure 4(a)). On average, consumers who use repair and hiring services have a 2% higher material footprint than consumers who do not when expenditure is controlled (table 3). This may be because of a rebound related to monetary savings from using repair and hiring services. On the other hand, it is possible that consumers who buy more products also need more repairing services. Since we use cross-sectional analysis here, the causal direction remains unclear. In any case, the result suggests that repair and hiring services are currently not substitutes for purchasing new products, at least not in large scale, which poses a challenge for circular economy.
The use of public transport decreases consumer material footprints by 4% on average (table 3), mainly due to reduced private vehicle ownership and use ( figure 4(b)). However, public transportation is generally much cheaper than owning and using private vehicles, and we find related rebounds. In Spain, Finland and France, consumers who use public transport, have a higher consumption and material footprint of services ( figure 4(b)). This probably relates to urban lifestyles-public transport services are mainly available in urban areas, where the supply of other services is also higher than in suburban and rural areas. Similarly, the consumption of 'other travel' , which includes public transport and holiday travel (transportation and miscellaneous consumption abroad), is naturally higher among consumers who use public transport. This is particularly true in Finland, where this offsets a large share of the benefits from decreasing private driving ( figure 4(b)).
Curiously, in the Czech Republic, the decreasing material footprint of transportation is offset by the increasing material footprint of housing energy (figure 4(b)), whereas in Spain, Finland and France, the material footprint from housing related consumption is lower among consumers who use public transport than among those who do not. The living space per capita is generally smaller in urban areas, but in the Czech Republic, the expenditure on gas, heat and electricity is higher among consumers who use public transport than those who do not, even though the income level is practically the same (table A9 in the appendix). Previously, Buhl et al (2019) have found that the material footprint of housing correlates negatively with vacations in Germany. They also found that environmentally conscious consumers have in general lower material footprints, except for vacations. These findings may also be related to the urban lifestyles. In sum, increasing use of public transportation can reduce material footprints, but the related rebounds can be significant, depending on the country. Among the tested consumption habits, a vegetarian diet is most clearly connected with a lower material footprint (figure 4(c), table 3). Laakso and Lettenmeier (2016) made similar findings related to reduced meat consumption. Consumers with a vegetarian diet have on average 64% lower material footprint of food consumption, and 23% lower total material footprint than their counterparts (table 3). The difference is also clear in the selected case countries in figure 4(c). There appear to be no significant rebound effects, potentially because a vegetarian diet may not reduce the overall costs of diets. However, in Cyprus and Spain, vegetarian consumers have a slightly higher material footprint of services than non-vegetarian consumers. This is mainly because of higher use of restaurant services. One possible explanation is that higher education reduces meat consumption (figure 2) and is also related to higher use of restaurant services.
Limitations of the study and suggestions for future research
The study has three main sources of uncertainty. First, the circular-and linear consumption indicators used here were chosen with a process that involved subjective decisions, and other researchers may have ended up with a different set of indicators. The used data caused limitations related to this. The Eurostat HBS includes limited information related to the environmental aspects of consumption. More detailed data on the quality of purchasers (longevity of products, green product labels, second-hand products etc) and the recycling habits of consumers would be needed for a deeper analysis on the impacts of circular consumption behaviour. In addition, studies on nonmonetized sharing and collaboration are called for (e.g. sharing among neighbours), since expenditure studies cannot capture this sort of behaviour. Second, the chosen environmental indicator, material footprint, has its inherent limitations (Fang andHeijungs 2014, Steinmann et al 2017). It sums up all materials regardless of the place of origin or type of material. In reality, the environmental impacts of RME vary between materials and locations. This is a very important issue for circular economy measurement: the circularity of some materials may be more important than the circularity of others with respect to environmental sustainability. The third main limitation is that the material footprint of construction of buildings and infrastructure is largely excluded due to data limitations (see the method section for details). In their recent study, Södersten et al (2020) highlight that including capital load in material footprints increases footprints significantly, particularly in real estate and other service sectors. Future studies could address the presented limitations with improved data collection and material footprint models. In addition, it would be good to collect longitudinal expenditure data in order to study causal relationships more rigorously.
Conclusions and policy implications
Here we examined what types of households exhibit circular consumption habits, and how circular consumption choices are connected to material footprints. We found no clear leaders in circular consumption. Instead, different types of households adopt different features of circular consumption, depending on age, life phase, gender, education etc. Furthermore, circular consumption choices do not necessarily lead to a lower material footprint. The use of repair and hiring services does not seem to decrease material footprints, and the use of public transport has significant rebounds in some of the studied countries. Among the studied circular and ecological consumption choices, a vegetarian diet has the clearest connection to lower material footprints. Overall, the results highlight that rebounds due to shifting consumption have a high potential to jeopardize the expected benefits of circular consumption.
Although consumption choices can potentially have a strong impact on environmental footprints, their impact in practice is often limited. Most consumers have no knowledge or understanding of rebound effects, and thus they may have high footprints despite being environmentally conscious in some areas of life (Ottelin et al 2017, Buhl et al 2019. Furthermore, even in the best case, consumers can only impact on their own purchases-not the economic flows after the purchase. A recent study by Greenford et al (2020) reveals that if the environmental impacts of labour (meaning the consumption of workers) are taken into account, there is actually little difference, whether we consume products or services.
Previous studies have highlighted potential rebounds in the circular economy from a production perspective (Zink andGeyer 2017, Figge andThorpe 2019). Here, we focused on household level rebounds related to constant household budgets. It should be noted that the circular economy fits within the green growth paradigm in the sense that it does not question the aim of continuous growth. Thus, in a circular economy, growing household budgets would be expected. As Zink and Geyer (2017) highlight, circular economy may actually lead to increasing overall production (and consumption), instead of substituting virgin materials with circulating materials. In order to avoid such a scenario, the use of virgin materials needs to be restricted, in addition to creating incentives to use secondary and renewable materials. For instance, the taxation of non-renewable resources should be increased, and taxation of renewable resources and labour should be decreased (Ellen MacArthur Foundation 2013, Ghisellini et al 2016. Fossil fuels should be phasedout systematically to avoid leakage effects (Le Quéré et al 2019). Other, non-monetary policies, such as green product labels and nudging, can also be used to support eco-efficiency and eco-design, and guide consumer choices (Ghisellini et al 2016, Lehner et al 2016, Geissdoerfer et al 2017. However, these should be seen as a complement to regulation and economic policy instruments, not as alternatives. It is often asked how rebound effects could be mitigated. However, this is not necessarily a meaningful aim. From the consumer perspective, a better aim would be to have equally low material (or any environmental impact) intensity (kg/€) for all products and services. In such a scenario, rebounds would always be 100%, and consumption choices would not make any difference from the environmental perspective. Although such an aim is practically impossible to achieve, it could be approached by the above-mentioned economic policies, and phase-out of environmentally most harmful economic activities.
Acknowledgments
The authors would like to thank Eurostat and the developers of the Exiobase model for providing the data used in the study. The study was supported by the Aalto University School of Engineering (grant 915530).
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors. | 9,443 | 2020-07-29T00:00:00.000 | [
"Economics"
] |
Human-centered specification exemplars for critical infrastructure environments.
Specification models of critical infrastructure focus on parts of a larger environment. However, to consider the security of critical infrastructure systems, we need approaches for modelling the sum of these parts; these include people and activities, as well as technology. This paper present human-centered specification exemplars that capture the nuances associated with interactions between people, technology, and critical infrastructure environments. We describe requirements each exemplar needs to satisfy, and present preliminary results developing and evaluating them
INTRODUCTION
Critical Infrastructure (CI), such as the water and rail sectors, are essential for day-to-day life.However, despite the attention given to parts of CI systems -such as water purification for water infrastructure, or train signalling in rail -there has been little work modelling the operating environments within which these parts are situated.Given the unforeseen circumstances that might arise due to complex interactions between people, technology, and the general environment, a security solution mitigating a risk in one type of CI system, may be inappropriate for addressing the same risk in another.
Specification exemplars are self-contained, informal descriptions of a problem in some application domain, and are designed to capture the harshness of reality (Feather et al. 1997).They can be used to promote research and teaching by introducing interesting and challenging problems, and provide a common model for evaluating solutions for the domain associated with the exemplar.Creating exemplars that address both of these needs can be difficult.For an exemplar to be useful, it needs to model different aspects of a problem, model a problem from different and potentially conflicting viewpoints, and deal with multiple sources of information.
Previous work by the authors (Faily et al. 2015) note that while specification exemplars focus primarily on modelling functional concerns, the nuances related to human issues are less easily modelled.By failing to model such nuances, exemplar users risk trivialising people and their work.In this paper, we present work designing and developing humancentered specification exemplars of nuanced CI environments.We describe five requirements for the exemplars before presenting preliminary results developing and evaluating them.
EXEMPLAR DESIGN PRINCIPLES
To address the issues in Section 1, we encapsulated five requirements into the design of each specification exemplar.
First, rather than being a textual description of a specific setting, each exemplar models the operating environment of a fictional CI company.Each model contains a goal model (van Lamsweerde 2009) representing the company's security policy and organisational constraints, asset models (Fléchais et al. 2003) describing the security properties associated with each asset, and floor plans of selected physical locations to provide context to how people and assets interact.Second, exemplars contain personas (Cooper 1999) of users in each environment, and tasks describing their typical work.Each task also contains information about how long it takes a persona to complete a task, how frequently the task occurs, how demanding the task is, and what conflicts may occur between different goals a persona might have.This information makes it possible to determine the impact that changing the environment might have on a persona's propensity for violating the security policy.
Third, each exemplar contains a selection of realistic vulnerabilities, attackers, threats, and risks specific to the type of CI system being modelled.By embedding these elements into the exemplar, one can see how vulnerabilities expose assets, and attackers realise certain threats to target assets by exploiting vulnerabilities.
Fourth, although exemplar models are static, model elements can be varied based on working contexts.
As such, a threat with a high likelihood in one context may be insignificant or non-existent in another.
Similarly, the same task carried out during the day might be more or less usable to a persona at night because the task might be truncated, or more stressful due to limited support in the event of problems.
Finally, exemplars are machine readable.For the purpose of our evaluation, exemplars were modelled as XML, to be compatible with the CAIRIS security design tool (Faily 2015).CAIRIS conforms to a metamodel for usable security (Faily and Fléchais 2010), enabling it to automatically generate visual models of how security, usability, and system elements interact with each other.For example, Figure 1 shows how tasks (blue ellipses) make use of certain assets (blue boxes) threatened or exploited by risks (red ellipses) within a given context of use.The model also shows the attackers and personas associated with each task and risk, and how usable personas find each task (different shades of blue).Although this model illustrates the complexity resulting from these different elements, CAIRIS provides facilities for filtering models, and generating documentation for some or all of the exemplar model.This allows exemplar users to focus on some aspect of a larger problem.
PRELIMINARY RESULTS
We developed two specification exemplars conforming to the requirements in Section 2; these are based on a fictional UK water company (ACME Water), and a fictional rail company in Southeast Europe (Balkan Rail).Each exemplar is grounded in empirical data from real CI companies.The data from ACME Water is drawn from two previous studies designing security for the water industry (Faily and Fléchais 2010;Faily and Fléchais 2011), and the data for Balkan Rail was collected specifically for the purpose of creating the exemplar.Both specification exemplars are publicly available (BANCIS Project Team 2016a,b).
The Balkan Rail exemplar is still under development, but the ACME Water exemplar provided context when evaluating a social engineering serious game (Beckers and Pape 2016).Personas, assets, and floor layouts were used as part of this game, where players were expected to devise social engineering attacks of people at ACME Water.
Although successful, adoption of the exemplar was initially difficult due to the vast amount of information in the model.Consequently, when using the exemplar, one person was designated as a 'plausibility oracle', and consulted the exemplar in CAIRIS to determine the impact of proposed attacks on ACME Water.Future work will present a detailed evaluation and critical reflection of both exemplars. | 1,360.8 | 2016-07-11T00:00:00.000 | [
"Computer Science"
] |
Frequency-Dependent Streaming Potential in a Porous Transducer-Based Angular Accelerometer
This paper presents a transient model of streaming potential generated when fluid flows through a porous transducer, which is sintered by glass microspheres and embedded in the circular tube of a liquid circular angular accelerometer (LCAA). The streaming potential coupling coefficient (SPC) is used to characterize this proposed transient model by combining a capillary bundle model of a porous transducer with a modified Packard’s model. The modified Packard’s model is developed with the consideration of surface conductance. The frequency-dependent streaming potential is investigated to analyze the effect of structure parameters of porous media and the properties of the fluid, including particle size distribution, zeta potential, surface conductance, pH, and solution conductivity. The results show that the diameter of microspheres not only affects bandwidth and transient response, but also influences the low-frequency gain. In addition, the properties of the fluid can influence the low-frequency gain. Experiments are actualized to measure the steady-state value of permeability and SPC for seven types of porous transducers. Experimental results possess high consistency, which verify that the proposed model can be utilized to optimize the transient and steady-state performance of the system effectively.
Introduction
Compared with angular displacement and velocity, angular acceleration manifests more efficient characterization of the high-order properties of complex systems. The angular accelerometer based on direct measurement of angular acceleration is widely used in rotation control, navigation, and vibration detection [1]. Recently, a new liquid circular angular accelerometer (LCAA) [2][3][4][5][6] was developed based on inertial liquid mass. Compared with other types of angular accelerometers, such as a molecular electronic transducer (MET) based on four electrodes [7][8][9][10], MEMS [11], heat transfer [12,13], and electromagnetic [14], LCAA possesses a balanced performance within the frequency range, accuracy, and space consumption.
The structure of LCAA was introduced by Cheng [4]. The porous transducer is a critical component of LCAA, which is sintered by glass microspheres under high temperature, and it is the only primary difference when compared with MET-based on four electrodes [7][8][9][10]. According to the principle of LCAA [2], the system of LCAA can be divided into two subsystems including a fluidic system and a molecular electronic system. Although, plenty of works on the fluidic system have been conducted and different models for fluid systems have been proposed [3][4][5], there are still many problems in establishing a theoretical model of the molecular electronic system, which is based on the electrokinetic effect [15] generated when fluid flows through a porous transducer. Laboratory experiments were designed to measure the steady-state streaming potential coupling coefficient (SPC) [15][16][17][18][19]. The theoretical model of SPC in porous media was studied and concluded as the Helmholtz-Smoluchowski equation (H-S equation) [15], giving a linear relationship between streaming potential and applied pressure difference. In addition, researchers also conducted in-depth research on factors affecting streaming potential in porous media, mainly analyzing solid-liquid materials and the macroscopic and microscopic parameters of porous media [15][16][17][18][19]. Several equivalent models of porous media were developed to analyze the influence of structure parameters on the electrokinetic effect, specifically the capillary bundle model [20][21][22] and the pore network model [23][24][25]. In order to obtain the mathematical model of a molecular electronic system in LCAA, the dynamic characteristics of streaming potential are intrinsic parts of the theoretical analysis.
For better understanding of the electrokinetic effect in various porous media, laboratory experiments have been investigated [26][27][28][29], which were used for qualitative analysis. Without the consideration of surface conductance [16,18], Packard [30] derived an expression of transient SPC in a circular tube by utilizing the Navier-Stokes equation. In order to simplify the calculation of the Bessel function, Reppert [31] rewrote Packard's model based on the thin electrical double layer (EDL) assumption, which was corrected by Tradif [26] later. Pride [32] obtained an expression of transient SPC in complex porous media by combining the Navier-Stokes equation with the Maxwell equation, and it was modified by Tradif [26]. After analyzing the assumptions and constraints of the four proposed models [33], Packard's model was finally selected.
In this paper, we present a modified Packard's model considering surface conductance, calculated by Revil's model [16]. Combining the capillary bundle model of a porous transducer in LCAA [34], the modified Packard's model is extended to the capillary bundle, and the dynamic model of streaming potential in porous media is established and employed to analyze the influence of structure parameters such as the particle size distribution (PSD), and solution properties like the zeta potential, surface conductance, pH, and solution conductivity on dynamic performance. In addition, experiments to measure steady-state permeability and SPC were actualized for seven types of porous transducers with different PSD. Compared with the permeability predicted by the Kozeny-Carman model [35,36], the permeability estimated by the capillary bundle model possesses higher accuracy, specifically lower than 15%.
System Structure and Principle of LCAA
The physical prototype and structure diagram of LCAA are illustrated in Figure 1. The main structure [3] is a circular tube made of glass, and the fluid mass flows in this tube. The porous transducer is a critical component of LCAA, which is sintered by glass microspheres and embedded in a circular tube. The principle of LCAA is shown in Figure 2. The circular tube together with the transducer move with external angular acceleration input. The pressure difference between the ends of the porous transducer results from the relative motion between fluid and transducer. After that, streaming potential is generated due to the EDL on the interface between the liquid and solid. The structure of EDL is illustrated in Figure 3. According to the principle, LCAA can be divided into two parts, specifically a fluidic system and a molecular electronic system. The theoretical model of fluidic system was proposed by Cheng [3], which was used to analyze the influence factors such as wave speed, the structure parameter of the circular tube, and the permeability of the transducer. In this paper, a transient model of the molecular electronic system is developed, and its influence factors are analyzed.
Theoretical Analysis of the Transient Model of the Electrokinetic Effect
This section concerns three aspects of the transient model in the molecular electronic system based on the electrokinetic effect. Specifically, we modify a transient model of the circular tube, establish a transient model of the molecular electronic system, and analyze its influence factors.
Modifying the Transient Model in the Circular Tube
The electrokinetic effect in a molecular electronic system can be characterized by the streaming potential coupling coefficient C sp , which is the ratio of the streaming potential to pressure difference applied to the flow path. The frequency-dependence of C sp (ω) has been studied for capillary tubes [26,30,31] and porous media [26,32].
• Packard's model: Packard [30] proposed a transient model of streaming potential E sp (ω) for a single circular tube by neglecting surface conductance in EDL and charge distribution in the diffusion layer. Based on the Navier-Stokes equation, E sp (ω) is given by: where E 0 = εζ/µσ 0 is the steady-state streaming potential [30], ∆P(ω) presents the applied pressure difference, ε and µ is the dielectric constant and the dynamic viscosity of the fluid, respectively, ζ means the zeta potential of the bulk fluid, σ 0 is the fluid conductivity, r c denotes the radius of a circular tube, and k is given by: where i 2 = −1 and ω and ρ are the angular frequency of the external input and the density of the fluid. J n denotes the n th -order Bessel function. The streaming current is given by: where I 0 = −2πεζr 2 c /µL c , and L c is the actual length of the capillary tube.
• Modified Reppert model:
Packard's model was rewritten based on the thin EDL assumption, aiming at simplifying the calculation of the Bessel function [31]. After that, it was corrected by Tardif [26], expressed as: In order to study the frequency-dependent streaming potential for the porous transducer, Packard's model is modified as: where σ s is the surface conductivity.
Capillary Bundle Model of the Porous Transducer
The steady-state model of the molecular electronic system is presented by employing the capillary bundle model [34], in which the porous transducer is equivalent to a bundle of circular capillaries with the same tortuosity τ c , as shown in Figure 4. The capillary radius distribution (CRD) can be calculated from PSD, and both are lognormal distributions, respectively presented by ln d p ∼ N µ d , σ 2 d and ln (r c ) ∼ N µ c , σ 2 c [34]. The parameters of CRD can be derived from PSD [34], specifically as: where Θ = √ m 2 F 2 /3, m is the cementation index of the porous media, and F = φ m presents the formation factor. φ means the porosity of porous media, calculated by [34]: The steady-state permeability is derived based on the capillary bundle model [34], presented as, where the expression of the original moment is:
Transient Model of the Electrokinetic Effect for the Capillary Bundle
Based on the capillary bundle model, the transient streaming current of the porous transducer can be expressed by: Considering the dynamic balance of this transient flow, a conduction current is formed to balance the transient streaming current, given by, where Σ s means surface conductance. Thus, the transient streaming potential is expressed by: In order to simplify the integral operation of the Bessel function, a uniform distribution of the capillary bundle with the same porosity is utilized, and the equivalent radius of the capillary is obtained by: The equivalent number of capillaries is given by: Adopting (14) and (15), (13) is rewritten as: Thus, SPC is derived as:
Analyzing the Influence Factors of the Electrokinetic Effect
According to (17), the parameters of PSD and the properties of the solution are both influence factors for the calculation of C sp , which are analyzed as follows.
Effect of the Structure Parameters of the Porous Transducer
The main parameter of the porous transducer is the PSD of the microsphere, which is obtained by measurement. The CRD of the capillary bundle is derived by (6) and (7). The equivalent mean radius of the capillaries is expressed by (14). In addition, the permeability of porous media not only affects the fluidic system, but also influences the electrokinetic effect in the molecular electronic system. A transient model of permeability [4] is expressed as, (18) where Λ is the characteristic length of the porous transducer, which can be calculated by the mean diameter d p of PSD, specifically as: Adopting (18), the transition frequency is obtained by:
Effect of the Properties of the Solution
Without considering the effect of temperature, three other properties of the solution mentioned in (17) are considered with dependence on the conductivity of the solution, including permittivity, zeta potential, and surface conductance.
Compared with the correlation fitted by Worthington [37], an empirical correlation [38] is derived and utilized to convert fluid conductivity into electrolyte concentration C f . This expression is valid for the solution with C f ∈ (0.0001, 0.1)M.
The permittivity of electrolyte solution can be calculated by employing the following equation, In addition, the dynamic viscosity was selected to be a constant.
For a brine solution, the dependence of the concentration for the zeta potential and surface conductance was established by Revil [16]. The zeta potential is modeled by, with:
Experiments
As shown in Figure 5a, a SurPASS electrokinetic analyzer [4] is employed to investigate the hydrodynamic and electrokinetic characteristics of the porous sample. The measuring unit illustrated in Figure 5b is constructed by: Seven types of porous transducers with different PSDs were utilized for the test, where the PSDs were controlled by sieves with different sizes. These transducers were made by pouring an amount of glass microspheres in a cylindrical mold and sintered under high temperature. Transducer size and mass were measured, then washed by pure water, and dried in a microwave to avoid the influence of impurities. Finishing the above-mentioned preparation, the transducer was embedded on a cylindrical measuring slot, and solution flowed through it, which was selected as a 0.0115 mol/L sodium chloride solution (NaCl). The structure parameters including CRDs are listed in Table 1, where µ d and σ d are the parameters of the microspheres. The porosity φ was obtained by the weighing method, and θ was given by √ m 2 F 2 /3. µ c and σ c are parameters of CRD, which were calculated by (6) and (7) separately. Meanwhile, the parameters of the NaCl solution are included in Table 2. Steady-state SPC C 0 was directly measured, which was used to calculate the permeability K 0 of the transducer [4].
Results and Discussion
In this section, the figures are used to discuss the effect of the porous transducer and electrolyte solution on the electrokinetic process. Meanwhile, the proposed transient model of the molecular electronic system is verified and employed to design the LCAA. Finally, some strategies are given to optimize the transient response and low-frequency gain.
Variation from Porous Transducer
Based on (6) and (7), the parameters of CRDs for different transducers are shown in Table 1. Specifically, the radius distribution of the capillary bundle is illustrated in Figure 6. The density of each radius is dimensionless, which was divided by the maximum value of the density. Thus, the peak value of each curve equaled one. In order to verify the capillary bundle model of the porous transducer, the permeability predicted by the capillary bundle model was compared with the values estimated by the Kozeny-Carman model [35,36]; while both predicting model were compared with experimental permeability. These results are illustrated in Figure 7. We can conclude that the capillary bundle model possessed higher accuracy when compared with the Kozeny-Carman model. The relative errors of the capillary bundle model for B1-B4 were specifically 10.83%, 9.70%, 11.29%, and 15.26%. Packard's model of the capillary was compared with three other models proposed for capillary or porous media. The parameters used in the modeling of transient C sp are presented in Table 3. The frequency dependence of C sp is illustrated in Figure 8. It can be seen that the magnitude-frequency characteristic of related models was basically the same, but the phase-frequency characteristic diverged.
The Pride model and Tardif model possessed a leading phase, while the lag phase was more reasonable in the physical system. In addition, the Reppert model was simplified with the assumption of thin EDL. Table 3. The parameters used in modeling C sp with different methods in Figure 8.
Parameter
Value Parameter Value Moreover, Packard's model was utilized to estimate C sp for different capillary radii, which is shown in Figure 9. The related parameters are given in Table 4. As shown in Figure 9, the amplitude of C sp was normalized by εζ/µσ 0 . Meanwhile, the transition frequency ω c reduced from 3.0166 × 10 11 Hz-3.0166 × 10 5 Hz, as the radius of the capillary increased from 0.07-70 µm; while the effective radius of the capillary in porous transducer varied from 3-15 µm, as concluded from Figure 6. The effect on the amplitude of C sp was not obtained from Figure 9 due to neglecting the surface conductance. 3.0166 × 10 5 3.0166 × 10 7 3.0166 × 10 9 3.0166 × 10 11 Finally, Packard's model was selected in this paper and is modified with the consideration of surface conductance in the following.
Variation from the Electrolyte Solution
Surface conductance was concerned to establish the proposed transient model for different porous samples. The amplitude of SPC C 0 was calculated with surface conductance for different samples [39]. A reduced SPC C 0 /K 0 is plotted in Figure 10, where K 0 is the permeability of the related sample. As shown in Figure 10, the solid line presents the values calculated with surface conductance, while the dotted line was obtained without considering surface conductance. The experimental data for different samples were measured by Boleve [39]. We can conclude that Packard's model [30] overestimated C 0 for low salinity, especially for pure solvent. According to Figure 10a, the estimation error of the dotted line was about 100% for the electrolyte solution with a conductivity of 0.001 S/mol. As the fluid conductivity becomes more than 0.1 S/mol, the error can be neglected, which is consistency with Revil [16]. Comparing the results of samples with different mean particle diameters d 0 , the estimated error decreased with the increase of d 0 . Specifically, the error was about 25% for Sample S3 as fluid conductivity was equal to 0.001 S/mol, which is illustrated in Figure 10d.
The surface conductivity of sample σ s was given by σ s = 6Σ s /d 0 [39]. The relationship between σ s and d 0 is presented in Figure 11 for different surface conductances. Laboratory experiments were actualized as Σ s = 4 × 10 −9 S by Boleve [39], which are also presented in Figure 11. The results show that the surface conductivity of the sample dominated for samples with a small size of particles. Meanwhile, a positive correlation between surface conductivity and surface conductance was observed. Figure 10. The influence of surface conductance on steady-state SPC. (a) reduced steady SPC C 0 /K 0 for Sample S1a with the mean diameter as d 0 = 56 µm; (b) reduced steady SPC C 0 /K 0 for sample S1b with the mean diameter as d 0 = 72 µm; (c) reduced steady SPC C 0 /K 0 for sample S2 with the mean diameter as d 0 = 93 µm; (d) reduced steady SPC C 0 /K 0 for Sample S3 with the mean diameter as d 0 = 181 µm. Figure 11. The relationship between σ s and d 0 for different surface conductances: the red line is for Σ s = 6 × 10 −9 S; the back line is for Σ s = 4 × 10 −9 S; the blue line is for Σ s = 2 × 10 −9 S.
As observed from Revil's model of zeta potential (23) and surface conductance (25), pH is also an important property for modeling SPC in a molecular electronic system. In this paper, the effect of pH on steady SPC is analyzed, which is illustrated in Figure 12. All other parameters of the model are included in Table 2, except pH and fluid conductivity. The amplitude of zeta potential ζ decreased with increasing fluid conductivity σ 0 or reducing the pH of the solution, which is shown in Figure 12a. Ignoring the surface conductance Σ s , the steady SPC C 0 is plotted in Figure 12b and possessed the same trend as the zeta potential, which was also concluded by Glover [40]. Combining the surface conductance model illustrated in Figure 12c, the steady SPC C sp showed a different dependency on pH shown in Figure 12d. We should ensure that the pH of the electrolyte solution remains stable in physical applications. The zeta potential also contributed to the transient model of SPC, which was investigated with different values for the same transducer B3. The result is illustrated in Figure 13. It is obvious that improving the absolute value of zeta potential was the most effective way to optimize the amplitude-frequency characteristic of C sp (ω). Based on the conductivity dependence of the zeta potential as shown in Figure 12a, we needed to select a pure solvent with a larger zeta potential as the fluid mass of LCAA. In addition, the zeta potential effect on transient performance was not achieved by the proposed transient model (17).
The transient model of SPC for the transducer (17) was employed to investigate the influence of the structure parameters of the transducer and the properties of the solution. Meanwhile, the dynamic model of permeability (18) was also analyzed. As shown in Figures 14 and 15, they were different in conductivity. Figure 14 was obtained with σ 0 = 1 × 10 −7 S (like pure water), while Figure 15 was given by σ 0 = 115 mS used in the experiment. The results show that the amplitude of C sp (ω) increased with increasing equivalent diameter of microspheres in the transducer with the same fluid conductivity, resulting in a reduction of the bandwidth of the molecular electronic system. Besides, the amplitude of K(ω) also increased with the increase of the equivalent diameter of microspheres, while it reduced the transition frequency of the permeability in the fluidic system. For the same porous transducer, the amplitude of C sp (ω) showed a negative dependence on the conductivity of the electrolyte solution. Meanwhile, the effect on bandwidth cannot be concluded by (17).
The steady-state SPC C 0 for four types of transducers was directly measured. The relationship between C 0 and the equivalent radius of capillary is presented in Figure 16a. The experimental results were consistent with the predicted results shown in Figure 15a. Specifically, the steady-state SPC increased with the decrease of capillary radius with the conductivity as σ 0 = 115 mS. In addition, the transition frequency ω c was calculated by (20). The relationship between ω c and the equivalent radius of capillary is illustrated in Figure 16b. The variation trend of ω c was the same as shown in Figure 15d, which decreased as the increase of the capillary radius. Hence, there was a "trade-off" between the transition frequency and steady state SPC. We should design an appropriate PSD for the porous transducer to improve the performance of LCAA. Combining the transient model of SPC (17) in the molecular electronic system and the dynamic model of the fluidic system [3], we can optimize the low-frequency gain, bandwidth, and dynamic performance of LCAA. The following strategies can be employed to improve the performance indexes of LCAA as listed in Table 5. 1. Since wave speed is the most important parameter in a fluidic system, improving the wave speed can extend the bandwidth and optimize the dynamic response, while the low-frequency gain remains the same. At present, it can be achieved only by reducing the gas percentage in fluid and increasing the thickness of the circular tube wall, which both have high technical difficulty. 2. In engineering, we can change the radius of a circular tube to improve the bandwidth. However, there is a "trade-off" between low-frequency gain and the performance of transient response.
We need to select a suitable value according the requirement of the application.
3. Adjusting the PSD of the porous transducer, the transient response of the molecular electronic system and fluidic system both can be optimized, while the low-frequency gain of the molecular electronic system is deduced. 4. Reducing the inner radius of the circular tube can improve transient performance. 5. The zeta potential is the key property that can effectively increase the low-frequency gain for the molecular electronic system, which can be adjusted by changing the types of solvent or the conductivity of the electrolyte solution. Table 5. Performance indexes of the liquid circular angular accelerometer (LCAA) [3].
Conclusions
This paper presents a transient model of the electrokinetic effect generated in a molecular electronic system of LCAA. With the consideration of surface conductance, Packard's model is modified. Combining with the capillary bundle model of the porous transducer, the transient model of the electrokinetic effect is established for the porous transducer. With the application of this model, the effect of the porous transducer and electrolyte solution on dynamic performance is investigated.
Specifically, the low-frequency gain is improved by increasing the effective radius of the capillary, which is obtained by the PSD of the porous transducer or increasing the zeta potential. As for transient performance, it can be optimized by changing the PSD of the porous transducer. We should notice that there is a trade-off between bandwidth and low-frequency gain when adjusting the PSD of the transducer. Thus, we need to design the parameter of the transducer according to the requirements of the application. The experiments of the steady-state SPC and permeability for seven types of transducers were actualized, which verified the capillary bundle model and the proposed transient model. Furthermore, some data given by Boleve [39] were also adopted to investigate the effect from the properties of the electrolyte solution.
Finally, the strategies for optimizing the performance of LCAA are proposed by combining the transient model of the molecular electronic system and the fluidic system. These strategies can be employed to guide the design of LCAA. | 5,716.2 | 2019-04-01T00:00:00.000 | [
"Physics"
] |
Temporal Analysis of Gene Expression in the Murine Schwann Cell Lineage and the Acutely Injured Postnatal Nerve
Schwann cells (SCs) arise from neural crest cells (NCCs) that first give rise to SC precursors (SCPs), followed by immature SCs, pro-myelinating SCs, and finally, non-myelinating or myelinating SCs. After nerve injury, mature SCs ‘de-differentiate’, downregulating their myelination program while transiently re-activating early glial lineage genes. To better understand molecular parallels between developing and de-differentiated SCs, we characterized the expression profiles of a panel of 12 transcription factors from the onset of NCC migration through postnatal stages, as well as after acute nerve injury. Using Sox10 as a pan-glial marker in co-expression studies, the earliest transcription factors expressed in E9.0 Sox10+ NCCs were Sox9, Pax3, AP2α and Nfatc4. E10.5 Sox10+ NCCs coalescing in the dorsal root ganglia differed slightly, expressing Sox9, Pax3, AP2α and Etv5. E12.5 SCPs continued to express Sox10, Sox9, AP2α and Pax3, as well as initiating Sox2 and Egr1 expression. E14.5 immature SCs were similar to SCPs, except that they lost Pax3 expression. By E18.5, AP2α, Sox2 and Egr1 expression was turned off in the nerve, while Jun, Oct6 and Yy1 expression was initiated in pro-myelinating Sox9+/Sox10+ SCs. Early postnatal and adult SCs continued to express Sox9, Jun, Oct6 and Yy1 and initiated Nfatc4 and Egr2 expression. Notably, at all stages, expression of each marker was observed only in a subset of Sox10+ SCs, highlighting the heterogeneity of the SC pool. Following acute nerve injury, Egr1, Jun, Oct6, and Sox2 expression was upregulated, Egr2 expression was downregulated, while Sox9, Yy1, and Nfatc4 expression was maintained at similar frequencies. Notably, de-differentiated SCs in the injured nerve did not display a transcription factor profile corresponding to a specific stage in the SC lineage. Taken together, we demonstrate that uninjured and injured SCs are heterogeneous and distinct from one another, and de-differentiation recapitulates transcriptional aspects of several different embryonic stages.
Introduction processing techniques and reagents. Additionally, many of these comparisons are made at the level of mRNA, or in-vitro, making it difficult to accurately appreciate the in vivo de-differentiated phenotype, and how closely it recapitulates developmental SC programs. Here, we have described the transcriptional profile of developing and de-differentiated SC in vivo in a comparable and relevant manner. We conducted an extensive spatio-temporal analysis through five key stages of embryonic mouse development (E9.0, E10.5, E12.5, E14.5, E18.5), postnatal stages P7 and P65, as well as within the P65 nerve following acute injury, when SCs are actively acquiring a reparative state.
Through these studies, we identified distinct expression profiles for NCCs, NCC precursors, SCPs, iSCs, pro-myelinating SCs and mature myelinating/non-myelinating SCs, and demonstrated an underlying heterogeneity of the SC pool. Our findings also demonstrated that the de-differentiated SC is a unique SC subtype, distinct from any one developmental stage in the SC lineage. Given that this 'repair' SC subtype is the driving force behind efficient regeneration in uncompromised nerve injuries, methods to recapitulate this phenotype could be further investigated as a therapeutic avenue to treat chronic nerve injury and demyelinating disease.
Materials and Methods Animals
CD1, C57/BL6, and Sox2eGFP mice [46] mice were purchased from Charles River Laboratories (Senneville, QC) and Jackson Laboratory (ME, United States) and maintained in a 12 hr light cycle. Embryos were staged using the morning of the vaginal plug as embryonic day (E) 0.5. Pregnant females were housed individually after mating, and euthanized using cervical dislocation for embryo collection. Adult mice used for peripheral nerve harvesting were group housed before injury, and then singly housed with enrichment post-injury. These animals were euthanized using an overdose (0.1mL) of Sodium Pentobarbital (54.7mg/mL, Ceva Sante Animale). Animal procedures were approved by the University of Calgary Animal Care Committee in compliance with the Guidelines of the Canadian Council of Animal Care.
Embryo processing
Whole embryos were collected for stages E9.0 and E10.5, while only bodies were collected for stages E12.5, E14.5 and E18.5. For post-natal studies, sciatic nerves were harvested from the limbs of P7/P65 pups. The embryos and nerves were fixed in 4% paraformaldehyde (PFA)/1X diethyl-pyrocarbonate (DEPC) treated phosphate-buffered saline (PBS) for~4-20 hours at 4°C. The embryos and nerves were rinsed in DEPC-PBS and transferred to 20% sucrose/1X DEPC-PBS, and were kept overnight at 4°C. The embryos and nerves were then embedded using O.C.T™ (Tissue-Tek1, Sakura Finetek U.S.A. Inc., Torrance, CA) and stored at -80°C. For the injury study, five days after injury (P65) mice were sacrificed by overdose of sodium pentrobarbital (i.p.; CEVA, Sante Animale). Nerves were removed and fixed in 4% PFA for two hours and subsequentally placed in 30% sucrose overnight. The next day, nerves were embedded in O.C.T.™ compound, frozen on dry ice and stored at -80°C before cutting cryosections on a Leica cryostat (Richmond Hill, ON).
Surgery
For crush injury, P60 mice were anesthetized using isofluorane (5% induction and 2% maintenance) and then given a preoperative subcutaneous injection of 0.1 mL (0.03 mg/mL) buprenorphine. Hindlimbs were shaved and then cleaned twice with 70% EtOH followed by 10% providine iodine. On only the right hindlimb, the sciatic nerve was crushed at mid-thigh using #10 forcep for one minute. Muscle and skin were sutured back together (7-0 Prolene and 7-0 Silk, black braided; Ethicon Inc.) and buprenorphine was administered once a day for 4 days following surgery.
RNA in situ hybridization
RNA in situ hybridization was performed as previously described [47].
Microscopy and image processing
For the embryonic and P7 sections, images were captured with a QImaging RETIGA 2000R or QImaging RETIGA EX digital camera and a Leica DMRXA2 optical microscope using OpenLab5 software (Improvision; Waltham MA). P65 nerve sections (injured and un-injured) were imaged using an inverted epifluorescent microscope (40x objective with oil, z-stack, Axio Observer Research Microscope; Zeiss Observer.Z1). Negative controls were included to distinguish non-specific secondary antibody binding. The captured images were processed using Adobe Photoshop software. Quantification in the embryonic studies was restricted to the Sox10 + NCCs at E9.0, and Sox10 + cells populating the dorsal and ventral root and DRG, and the exiting spinal nerve at the remaining stages. Quantification in the injury studies was conducted using a minimum of three images taken distal to the crush site from three different tissue sections for each animal (n = 3 mice per group). Double and single-positive cells (protein of interest co-localizing with Sox10 + SCs and Sox10 + SCs) were manually counted using Adobe Photoshop software. An unpaired student's t-test was performed using Prism software (Graph-Pad) to determine whether there was a difference in protein expression of intact or actuely injured adult nerves (significance p<0.05).
Expression of Schwann cell lineage markers in neural crest cells
We first assessed the expression of SC markers in delaminating trunk NCCs at E9.0 ( Fig 1A). To reliably label NCCs, and at later stages, to mark peripheral cells fated for the glial lineage, we used Sox10 as a co-label in all marker studies. Sox10 is continually expressed in NCCs, SCs and satellite glia throughout development [26,28], and is required for the differentiation of all peripheral glia [28,49,50]. At E9.0, Sox10 was expressed in trunk NCCs delaminating from the neural tube, including those following both dorsolateral and ventral migratory routes ( These observations were consistent with previous reports documenting the activation of a Nfat transcriptional reporter [45] and the expression of Sox9 [26], AP2α [39], and Pax3 [51] in migrating NCCs. Of these markers, Sox9 induces a NCC phenotype [25], and its expression biases migrating NCCs towards glial and melanocyte lineage selection [26], whereas essential roles have only been documented at later developmental stages for the remaining transcription factors in the SC lineage: AP2α maintains a SCP fate, impeding the transition to an iSC [39], Pax3 regulates SC proliferation [52] and Nfatc4 acts synergistically with Sox10 to initiate Egr2 expression in SCs [45].
At E9.0, Etv5 was also expressed, but only in a very small number of Sox10 + NCCs (S1A-S1C Fig), which may be why previous reports have suggested that Etv5 is not expressed in the E9.0 neural crest [42]. Jun expression was also detected in a subset of NCCs, but instead of labeling cells in the ventral migratory pathway, it was primarily expressed in NCCs following a dorsolateral migratory route, which are pre-destined to a melanocyte fate (S1D-S1F Fig). In contrast, we did not detect the expression of Oct6 (S1G-S1I Thus, Sox9, AP2α, Pax3, and Nfatc4 are widely co-expressed with Sox10 in E9.0 NCCs following a ventral migratory route, albeit not in all SCs for AP2α and Nfatc4, whereas Etv5 is only expressed in a small subset of NCCs, and Jun instead marks dorsolaterally migrating NCCs.
Expression of Schwann cell lineage markers in migratory NCC precursors
By E10.5, NCCs have coalesced to form the DRG, which at this stage, are comprised of sensory neurons and migratory NCC precursors that are destined to become SCPs and satellite glial cells (Fig 1B). Emanating from the DRG are the dorsal and ventral roots, which coalesce to form the mixed sensory/motor spinal nerve. NCCs also give rise to a subset of multipotent cells called boundary cap cells that are located at the dorsal root entry zone and motor exit points. SCs populating the dorsal and ventral roots find their origin in these boundary cap cells, as do a few satellite glial cells [15]. At E10.5, Sox10 was expressed in boundary cap cells in the dorsal and ventral roots of the DRG, in presumptive SCPs, in satellite glia in the periphery of the DRG, and in migratory NCCs in the spinal nerve (Fig 3A-3C). Sox10 expression was for the most part excluded from the central DRG, where NeuN + neuronal cells are located (S2D and S2E Fig). Similar to their co-expression profiles in E9.0 NCCs, Sox10 was largely co-expressed with Sox9 (100±0% Sox9 -+ Sox10 + /Sox10 + cells; Fig 3A-3C' and 3P) and AP2α (100±0% AP2α + Sox10 + /Sox10 + cells; Fig 3D-3F' and 3P). Importantly, not all Sox9 + cells were Sox10 + , as Sox9 was expressed in a larger subset of non-glial cells (as seen in other stages as well), confirming that the antibodies are not recognizing epitopes shared between Sox family members. We further confirmed the specificity of the antibodies by demonstrating that Sox9 and Sox10 have distinct staining patterns in the CNS (S3A- S3F Fig). Pax3 also continued to be co-expressed with Sox10, however at reduced levels (50.1±7.4% Pax3 + Sox10 + /Sox10 + cells; Fig 3G-3I' and 3P), and a much smaller number of Sox10 + NCCs coalescing in the E10.5 DRG and in the ventral root expressed Nfatc4 (3.7 ±0.8% Nfatc4 + Sox10 + /Sox10 + cells; Fig 3J-3L' and 3P). In addition, Etv5 expression was initiated at E10.5 in Sox10 + NCC precursors in the DRG (83.9±6.3% Etv5 + Sox10 + /Sox10 + cells; Fig 3M-3O' and 3P), consistent with previous reports [42,53].
In vitro studies suggested that a block of Etv5 function in NCCs affects neuronal and not glial fate specification [54]. Consistent with these findings, Etv5 (S2J-S2L Fig) as well as AP2α (S2G- S2I Fig) were also expressed in the neuronal-rich central part of the DRG, where they were co-labeled with NeuN, a pan-neuronal marker. In contrast, Sox9 (Fig 3A-3C) was exclusively co-expressed with Sox10 in the DRG periphery, where presumptive peripheral glia are located. Sox9 was also expressed with Sox10 in the ventral root, developing spinal nerve, and in mesenchymal tissue between the DRG and somites (Fig 3A-3C). In contrast, Oct6 (S4A-S4E In summary, at E10.5, Sox9, AP2α, Pax3, Nfatc4 and Etv5 are co-expressed with Sox10 in presumptive NCC-derived glial cells in the dorsal and ventral roots, DRG and developing spinal nerve, and a subset of these cells have progressed to a SCP fate based on the co-expression of glial markers (data not shown).
Expression of Schwann cell lineage markers in Schwann cell precursors
By E12.5, the vast majority of NCC precursors destined for a glial lineage have differentiated into either SCPs or satellite glia. Morphologically, SCPs are distinguished from migrating NCCs as they associate directly with growing axon bundles, but they lack the basal lamina secreted by iSCs ( Fig 1C). SCPs are located proximal to the growing nerve tip, and they participate in compacting the nerves while also guiding axons to their targets [10]. Satellite glia can be partially distinguished from SCPs based on their location; satellite glia are in the DRG but are excluded from the nerves, whereas SCPs are found in both locations. However, because of the lack of specific markers, satellite glial cells are not easily distinguished from SCPs within the DRG, although they do have a more flattened nuclear morphology [55,56]. For simplicity, we use the SCP nomenclature for precursor cells for both satellite glia and SCs.
At E12.5, Sox10 expression was slightly more widespread in the DRG compared to E10.5, marking both peripheral and central DRG cells (Fig 4A and 4B). The extension of Sox10 expression into the central DRG did not include sensory neurons, as Sox10 was not coexpressed with NeuN at this stage (S5D- S5F Fig). Instead, Sox10 was expressed exclusively in presumptive peripheral glia, as previously suggested [26,28]. Sox10 was also expressed in the dorsal (Fig 4A and 4C) and ventral (Fig 4A and 4D) roots, where migrating SCPs are located. In co-expression studies, Sox10 continued to be highly co-expressed with Sox9 (99.8±0.2% Sox9 + Sox10 + /Sox10 + cells; Fig 4A- ). Most Sox10 + Etv5 + cells lined the periphery of the DRG and were likely satellite glial cells based on their flattened nuclei. Etv5 expression was also detected in a few SCPs in the dorsal and ventral root. In addition, Etv5 was co-expressed with NeuN + in DRG sensory neurons (S5M- S5O Fig). This data is consistent with previous reports indicating that Etv5 transcripts are detected in satellite glial cells and DRG sensory neurons [42,53], although this previous study did not detect Etv5 transcripts in the sciatic nerve [54]. In contrast to Etv5, Sox9 (S5A- In summary, E12.5 SCPs undergo a temporal shift in their expression profile, retaining the expression of Sox10, Sox9, AP2α and Pax3, as observed in E10.5 NCCs, while losing the expression of Nfatc4 and Etv5, and gaining the expression of Sox2 and Egr1. The initiation of Sox2 ( Fig 4Q-4T') and Egr1 (Fig 4U-4X') expression in E12.5 peripheral glia is consistent with previous reports documenting the expression of Sox2 in SCPs and iSCs [23] and Egr1 in SCPs [35]. However, Egr1 is also co-expressed with NeuN in the DRG (S5S-S5U Fig), indicating that it also labels sensory neurons. Of the genes that are newly expressed at this stage, Sox2 expression is upregulated in a subset of cells that develop into the PNS [23], and within the SC lineage, Sox2 regulates the differentiation of SCPs into myelinating SCs versus melanocytes [24], while Egr1 is considered a non-myelinating SC marker [35].
Expression of Schwann cell lineage markers in immature Schwann cells
As development proceeds, SCPs can either give rise to iSCs, or alternatively, endoneurial fibroblasts and melanocytes [18,19]. iSCs appear from E14.5 and persist until just before birth [10] ( Fig 1D). iSCs cluster around several axons and deposit a basal lamina that surrounds both the iSCs and the axonal bundle [20]. iSCs then penetrate axonal bundles, positioning larger diameter axons in the periphery for radial sorting. A characteristic feature of iSCs is that they secrete autocrine survival factors so that they are no longer entirely dependent on axon-derived Neure-gulin1, present on the surface of axons [10,57].
By E14.5, Sox10 was expressed in scattered cells throughout the DRG, including in the center and periphery, where iSCs and satellite glia are located (Fig 5A and 5B). Sox10 was not coexpressed with NeuN, confirming that it is exclusively labeling glial precursors at E14.5 (data not shown). In addition, Sox10 was expressed in the dorsal (Fig 5A and 5C) and ventral roots (Fig 5A and 5D) and in the exiting spinal nerve (data not shown). In co-expression studies at E14.5, Sox10 was still highly co-expressed with Sox9 (97.1±1.8% Sox9 + Sox10 + /Sox10 + cells; Thus, the major difference between E12.5 SCPs and E14.5 iSCs is the loss of Pax3 expression. Previous studies had detected Pax3 transcripts in SCPs as well as in iSCs, but indicated that Pax3 transcript levels decline in late iSCs undergoing radial sorting, and protein levels were not assessed [43]. In summary, E14.5 iSCs are characterized by the expression of Sox10, Sox9, AP2α, Sox2 and Egr1, as well as glial lineage markers (data not shown), and they differ from E12.5 SCPs in that they no longer express Pax3.
Expression of Schwann cell lineage markers in pro-myelinating Schwann cells
As iSCs develop, they extend cytoplasmic processes that penetrate axonal bundles, helping to distinguish large and small diameter axons. Larger axons are rearranged to the periphery of the bundle, with iSCs associating in a 1:1 proportional manner with these large diameter axons, resulting in radial sorting [20]. iSCs that associate with large diameter axons are a transient population termed pro-myelinating SCs; these are the SCs that will progress towards the myelinating stage (Fig 1E). In addition, a subset of late iSCs persists at E18.5 in association with multiple smaller axons, which are destined to become non-myelinating SCs. Thus, pro-myelinating SCs represent a transient phase in the SC lineage, first appearing just prior to birth, and expanding greatly on the first postnatal day [16].
At E18.5, Sox10 was expressed throughout the DRG, including in the center and periphery (Fig 6A and 6B). In the DRG center, Sox10 + late iSCs and pro-myelinating SCs amalgamated around the growing spinal nerve (Fig 6G and 6H). In dual labeling studies, the only transcription factors expressed at E14.5 that continued to be co-expressed with Sox10 at E18.5 were Sox9 (98.6±1.4% Sox9 + Sox10 + /Sox10 + cells; Fig Thus, three new transcription factors are expressed in E18.5 late iSCs and pro-myelinating SCs: Jun, Oct6 and Yy1. Jun has previously been shown to be expressed in late immature SCs and downregulated with the onset of myelination [41]. Oct6 + SCs were restricted for the most part to the boundary cap or the ventral root and exiting spinal nerve (Fig 6G-6H), most likely representing pro-myelinating SCs. Indeed, Oct6 is well studied for its role as a cell autonomous regulator of SC development [37] and a pro-myelinating SC marker [38], and is also essential for bringing about the pro-myelinating to myelinating SC transition [38]. Finally, Yy1 is important for attaining the myelination phenotype, such that conditional knockdown of Yy1 in SCs results in hypomyelinated nerves with poor expression of the myelin genes MPZ and Pmp22 [36].
At In summary, E18.5 late iSCs and pro-myelinating SCs in the DRG and nerve roots are characterized by the expression of Sox10, Sox9, Jun, Oct6, Yy1 and AP2α (in the DRG only), and the loss of expression of Sox2 and Egr1.
While Egr2 promotes the terminal differentiation of SCs to a myelinating phenotype, Egr1 and Pax3 are considered non-myelinating SC markers [35,44]. Sox9 is also expressed later in neonatal myelinating and non-myelinating SCs [58], and can cooperatively bind the P0 promoter, a mature SC marker [30], suggesting that Sox9 may also function in postnatal SCs. Nfatc4 acts synergistically with Sox10 to activate the expression of Egr2 in the embryonic nerve [45,59], which suggests a later role for Nfatc4 as Egr2 is required for the terminal differentiation of SCs to a myelinating phenotype [35]. Similarly, Oct6 [38] and Yy1 [36] are required for the myelination of peripheral nerves. In contrast, Jun expression is downregulated by Egr2 upon the onset of myelination, and is thus considered a marker of non-myelinating SCs [41].
Thus, the P7 nerve is primarily populated by Sox10 + myelinating SCs that co-express Sox9, Nfatc4, Jun, Oct6, Yy1 and Egr2, differing from E18.5 pro-myelinating SCs by the initiation of Nfatc4 and Egr2 protein expression. Non-myelinating SCs may also be present based on the expression of Jun, but they fail to express Egr1 (at least in the nucleus) and Pax3 at this stage.
In summary, early postnatal SCs are characterized by the continued expression of Sox10, Sox9, Jun, Oct6 and Yy1, the acquisition of Nfatc4 and Egr2 protein expression, and the loss of AP2α expression. In contrast, late postnatal (i.e., adult) SCs initiate Sox2 and Egr1 expression at low levels, and begin to downregulate Jun and Oct6 expression.
Nerve injury triggers adult Schwann cells to recapitulate a unique pattern of embryonic glial lineage transcription factors
Peripheral nerve injury has been suggested to induce SC de-differentiation and an iSC phenotype [11,17]. However, while there are clearly global changes in SC gene expression post-injury [17,40,62], whether a specific embryonic SC stage is recapitulated, and which set of gliogenic transcription factors are deregulated, remains poorly understood. We thus assessed our panel of transcription factors in "repair" SCs by performing a crush injury on the P60 sciatic nerve, and assessing gene expression at P65, 5 days post injury (dpi).
We failed to detect Pax3 protein in Sox10 + SCs 5 dpi (S11J-S11L Fig), even though Pax3 transcripts have been isolated from SCs following acute injury [43]. Similarly, Etv5, which marks satellite glial cells and is downregulated in maturing SCs [42], was not expressed in Sox10 + SCs (S11P-S11R Fig). Finally, AP2α, which is expressed in SCPs and involved in negatively regulating SC maturation [39], was also absent in the distal segment at 5 dpi (S11D-F). In summary, with the exception of Egr2, SCs retain the expression of the core transcriptional program of a SC identity post-injury, including Sox9, Nfatc4 and Yy1. In addition, a 'repair' SC phenotype is characterized by an increase in expression of the SC lineage markers Sox2, Jun and Oct6, but other embryonic SC markers (AP2α, Pax3 and Etv5) are not induced, suggesting that SCs do not de-differentiate to a particular embryonic state.
Discussion
Generation of SCs from NCCs is a progressive process characterized by at least five transient embryonic stages of development. Here we have defined these developmental stages by examining the expression patterns of 12 transcription factors (Sox2, Sox9, Sox10, AP2α, Pax3, Nfatc4, Etv5, Jun, Yy1, Egr1, Egr2, Oct6) (Fig 11). While other studies have performed more global analyses of gene expression in SC lineages between E9.5 and P0 [67], similarly delineating the dynamic changes that occur over these time points, our focus on protein expression levels of a core set of transcription factors provides a framework for scientists to reliably follow temporal identity transitions in this lineage. Based on this temporal profile, we can begin to delineate the potential diverse functional contributions of these genes during SC development and their recapitulation following peripheral nerve injury.
At E9.0, the earliest NCC stage is characterized by the expression of Sox10, Sox9, AP2α, Pax3 and Nfatc4, a gene expression profile that is maintained at E10.5. However, a distinct feature of NCC precursors coalescing in the DRG is the initiation of Etv5 expression and concomitant loss of Nfatc4 expression. Next, E12.5 SCPs in the DRG and dorsal and ventral roots retain the expression of Sox10, Sox9, AP2α, and Pax3 while also initiating the expression of Sox2 and Egr1, but losing the expression of Etv5. This is followed at E14.5 by the loss of Pax3 expression, while Sox10, Sox9, AP2α, Sox2 and Egr1 continue to be expressed in iSCs. A more dramatic change in gene expression is observed in E18.5 late immature/pro-myelinating SCs, which express Sox10, Sox9, Jun, Oct6 and Yy1, but lose the expression of AP2α, Sox2 and Egr1 in the nerve. At P7 and P65, we observed a very similar expression profile as seen at E18.5 (i.e., Sox10, Sox9, Jun, Oct6, Yy1) with the added expression of Nfatc4 and Egr2 at P7 and P65, and Egr1 and Sox2 expression in the P65 nerve.
Following peripheral nerve injury, SCs assume a transient 'de-differentiated' phenotype that is critical for supporting axonal regeneration and nerve repair. Hence, we characterized expression of the transcription factor panel following injury in the adult P65 nerve, comparing it against the embryonic and postnatal profile. We reveal that denervated SCs upregulate the expression of only a subset of early glial-lineage transcription factors. Indeed, expression of genes associated with both SCPs (Sox2, Egr1) and pro-myelinating/late immature SCs (Oct6, Jun) are elevated following denervation of adult SCs, while genes involved in the myelination program are actively lost (Egr2). Importantly, the absence of expression of several embryonic genes in denervated SCs (AP2α, Etv5, Pax3) may provide a strategy to enhance or prolong the repair phenotype to improve recovery of function following PNS injury or neuropathy.
Sustained expression of Sox9 and Sox10 across the Schwann cell lineage
An intricate network of transcription factors controls the timely development of peripheral glia, including several of the transcription factors examined in this study. One of the core regulators of SC development is Sox10, which we used to mark peripheral glial cells in our co-labeling experiments. Indeed, we (this study) and others [26,28] found that Sox10 is continually expressed in Schwann and satellite glial cells throughout development (Figs 1-9) and into adulthood. Prior studies have revealed that Sox10 is required for the specification and terminal differentiation of iSCs, and to maintain a peripheral glial phenotype [28,49]. Mechanistically, Sox10 directly activates Egr2 expression, acting synergistically with Oct6 and Nfatc4 [30][31][32][33][34] to induce the expression of peripheral myelin genes such as myelin basic protein (MBP), myelin protein zero (MPZ), myelin associated glycoproteins and connexin-32 (Cx32) [32,34,45]. Consequently, deletion of Sox10 results in loss of Egr2 expression, as well as myelin sheath degeneration and axonal death resulting in declined nerve conduction [29]. However, conditional Sox10 ablation studies also revealed that Sox10 is essential for survival of early migrating trunk NCCs, but not the survival of adult SCs, indicating that it is a critical player early in the SC lineage [29,68,69] and later to maintain functional myelination.
Interestingly, we also found that Sox9 follows a similar temporal pattern with sustained, overlapping expression with Sox10. Sox9 induces a NCC phenotype [25], and its expression biases migrating NCCs towards glial and melanocyte lineage selection [26]. While previous studies have observed Sox9 expression in the peripheral nerve at E14.5 [58], earlier stages of Fig 11. Summary of temporal expression profiles of key transcription factors in the SC lineage. Sox9 and Sox10 are expressed throughout SC genesis, beginning at E9.0 in migrating NCCs and persisting until P65 in myelinating and non-myelinating SCs in the sciatic nerve. AP2α, Pax3 and Etv5 are also expressed in NCCs, persisting until E12.5 in SCPs with Nfatc4 expression being restricted to NCCs. Egr1 and Sox2 expression is initiated in SCPs at E12.5, persisting until E14.5 in iSCs. At E18.5, AP2α is expressed in the DRG but is undetectable in the nerves while the expression of Jun, Oct6, and Yy1 is initiated and persists till P65. Rare Sox2 + Sox10 + positive cells are also detected in the P65 nerve. Egr2 expression is not detected until P7 in myelinating SCs in the sciatic nerve. Post-injury, high upregulation in expression of Sox2, Oct6, and Jun is observed, while distinct nuclear Egr1 expression is also detected. The straight lines represent continued expression of the markers through the different stages, while the dotted lines represent declining or low expression. The asterisk indicates where expression is restricted to the DRG and is undetectable in the nerves at E18.5. Markers expressed in the nerve after an acute injury have been denoted with ' p '. The green cells represent the developing axon, while the beige cells represent the NCCs at E9.0-E10.5 and the developing SCs at E12.5 to post-natal stages. doi:10.1371/journal.pone.0153256.g011 Sox9 expression have not been documented. Interestingly, recent work has suggested that isolated human SCs show negligible Sox9 expression, but Sox9 is over-expressed in neurofibromatosis 1 tumor derived SCs [70], hinting to a role for Sox9 in promoting SC proliferation. It is possible that sustained Sox9 activation in Schwann cells may enable re-establishment of the immature SC phenotype and re-entry into the cell cycle, and should be addressed in future studies.
Transcriptional regulators expressed at early stages in the Schwann cell lineage
In addition to Sox9 and Sox10, four other transcriptional regulators in our panel were expressed at the NCC stage; AP2α, Pax3, and Etv5, with Etv5 appearing one day later than the others. One novel observation was that Nfatc4, which is a calcium-responsive transcription factor, exhibits transient early expression in NCCs but is rapidly lost by E10.5, before re-initiating expression in maturing SCs at P7. A role for Nfatc4 in early NCCs has not previously been reported, however at later stages, Nfatc4 has been reported to bind a myelin specific enhancer in Egr2, cooperatively with Sox10, to activate Egr2 and other myelin genes during the pro-myelination to myelination transition [45]. AP2α expression persists until 18.5, however, AP2α + Sox10 + cells at this stage are confined to the DRG, and AP2α expression is not seen in the Sox10 + SCs lining the nerve. This could be suggestive of a role for AP2α in satellite glial cells in the DRG. Also, AP2α is co-expressed with majority of Sox10 + cells throughout development, except at E18.5 when AP2α expression becomes restricted to the DRG. Interestingly, overexpression of AP2α in vitro blocks the transition of SCPs to iSCs [39], even though we found that this transcription factor is expressed in iSCs and pro-myelinating SCs. The in vivo requirement for AP2α in the SC lineage has not yet been determined.
Pax3 is co-expressed with most Sox10 + glial lineage cells at early embryonic stages, and may play a role in regulating the proliferation of these early glial cells. Indeed, Pax3 induces proliferation in SCPs, and Pax3 transcript levels decline at the onset of differentiation [43]. Finally, we also found that Etv5 expression is limited to E10.5 NCC precursors in the DRG and in satellite glia cells at later stages. The function of Etv5 in the SC lineage has not been elucidated, although misexpression of dominant negative Etv5 in NCCs affects neuronal and not glial specification [54]. The absence of Etv5 at all later time points of the SC lineage (including after injury) suggest that it does not play a role in SC differentiation or myelination.
Transcriptional regulators expressed at late stages in the Schwann cell lineage
Several of the transcription factors in our panel were not expressed in NCCs, but were expressed in definitive cells in the SC lineage, including Sox2, Egr1, Jun, Oct6, Yy1 and Egr2. In contrast to Sox9 and Sox10, Sox2 is expressed in a more limited window of SC development, appearing in E12.5 SCPs and E14.5 iSCs. A decline in number of Sox2 + Sox10 + cells is observed at E14.5. This data is consistent with previous studies demonstrating that Sox2 expression declines upon neuronal commitment, and continues at low levels in SCPs and iSCs [23]. Interestingly, persistent Sox2 expression suppresses myelin-associated genes such as Egr2 and MPZ whilst maintaining cells in an undifferentiated state [22]. Notably, cross-repressive interactions between Sox2 and Mitf/Egr2 regulate the differentiation of SCPs into either myelinating SCs or melanocytes [24]. Sox2 is thus considered a negative regulator of myelination.
The zinc finger transcription factors Egr1 (Early growth response 1) and Egr2 have nearly identical DNA binding domains but opposite effects on myelination; Egr2 (Krox-20) promotes the differentiation of SCs to a myelinating phenotype while Egr1 is a non-myelinating SC marker [16,35]. Hence, we (this study) and others [35] found that Egr1 is expressed in SCPs but is downregulated as the cells mature. At postnatal stages, Egr1 expression is also re-initiated in non-myelinating SCs [35], where a modest cytoplasmic expression pattern is observed, and is sustained in a subset of SCs within the adult (P65) sciatic nerve. Conversely, Egr2 transcripts are detected in the dorsal and ventral roots from E10.5 onwards but are absent from the SCs in the DRG and peripheral nerves throughout embryogenesis [15]. Strikingly, we did not detect Egr2 protein in the dorsal and ventral roots, suggesting that it may not be translated until postnatal stages. Indeed, we (this study) and others [35] observed Egr2 protein in SCs lining the postnatal peripheral nerve, which is expected considering its requirement for myelination. e observed Jun expression in late immature/pro-myelinating SCs, P7 and in rare Jun + Sox10 + SCs in the adult nerve (P65). Downregulation of Jun expression is mediated by Egr2 just prior to onset of myelination [40]. Overexpression of Jun has been associated with a decline in myelination and de-differentiation of SCs, along with a reduction in Egr2 and MPZ levels [40]. Similarly, we detected Oct6 expression in a subset of Sox10 + pro-myelinating SCs at E18.5, in mature SCs at P7 and in rare cells at P65. Oct6 acts in synergy with Sox10 to induce Egr2 expression, which in turn promotes the expression of several myelin proteins [34,37]. Oct6 deficient mice show a transient arrest at the pro-myelinating stage, which is overcome by P10, with a late onset of Egr2 expression and myelin formation [37]. Oct6 not only promotes myelination by promoting terminal differentiation from pro-myelinating to myelinating SC via induction of Egr2, but also prevents premature myelination by repression of MBP and MPZ. A progressive reduction in Oct6 levels allows MBP and MPZ to be activated, thereby initiating a temporally controlled myelination program. Notably, constitutive overexpression of Oct6 results in a persistent hypomyelination phenotype in mice and gradual axonal loss [71], suggesting that varied levels of Oct6 allow for diverse functional contributions across the SC lineage.
Yy1, expressed at E18.5 and postnatally at P7/P65, is important for attaining the myelination phenotype, such that conditional knockdown of Yy1 in SCs results in hypomyelinated nerves with deficient expression of MPZ and Pmp22 [36]. The Egr2 promoter and Myelinating Schwann cell Element (MSE) has multiple binding sites for Yy1. Activation of MEK pathway occurs in response to the axonal signaling molecule, Neuregulin1. This in turn results in serine phosphorylation of Yy1. The phosphorylated Yy1 is recruited to binding sites in the Egr2 promoter and MSE, activating Egr2 expression [36] and underscoring its important role in SC myelination.
Injury activates SC lineage genes that recapitulate features of both Schwann cell precursors and pro-myelinating Schwann cells
Following a nerve crush injury we observed continued expression of Sox10 in nerve SCs and co-expression with Sox9, Nfatc4 and Yy1. Previous work showed that Sox9 is expressed in isolated SCs from P3 nerves [58] but its presence in adult SCs in vivo or following injury has not been reported. Constitutive expression of Sox9 in adult (uninjured) SCs and following injury in vivo, suggesting that Sox9 may play a continued role in the maintenance of the SC fate or in sustaining SC competence to re-acquire a de-differentiated state, particularly since it has a demonstrated role in both induction and maintenance of self-renewal capacity in subependymal neural stem cells [72] and various epithelial stem cell types [73,74]. Indeed, SC de-differentiation encompasses the hallmark features of a stem cell, exhibiting both the capacity for self-renewal, and the ability to generate mature cell types. Future conditional knockout studies will need to be done to determine the role of Sox9 in adult SCs and its potential contribution to the de-differentiation process.
Acutely injured SCs exhibit robust activation of the myelin-inhibitory gene Sox2, as has been previously reported [22], and elevated levels of Egr1, both of which are unique to SCPs and iSCs and absent in late immature/pro-myelinating SCs (Fig 10). However, several other transcription factors that are upregulated in denervated SCs are markers of late immature/promyelinating SCs, including Jun, and Oct6 (Fig 11). SC de-differentiation was also associated with a concomitant loss of mature myelinating genes such as Egr2.
Egr1 is a transcriptional activator that is normally active during cell cycle-re-entry [75]. Although the frequency of Egr1 + Sox10 + SCs did not change, Egr1 protein exhibited a marked increase in intensity and nuclear translocation following injury, suggesting that denervation causes a change in Egr1 function [35]. Egr1 may thus be an important modulator of SC plasticity. Egr1 and Egr2 appear to play opposing roles in modulating the acquisition of non-myelinating versus myelinating phenotypes [35]. Sustained Egr1 expression in a subset of SCs at all postnatal stages, and its known role in regulating cell cycle entry, suggests that this factor might also enable SC proliferation after injury and/or activation of other genes that are necessary for de-differentiation, partly through its known cooperation with Egr3 [76]. Future studies using conditional knockout approaches will need to be done in order to determine the ultimate role for Egr1 and Egr3 in acquisition of the de-differentiated SC state.
Interestingly, the frequency of pro-myelinating genes Yy1 and Nfatc4, did not change following injury further indicating the retention of late immature/pro-myelinating SC traits. Despite their putative pro-myelination function, many Yy1 + and Nfatc4 + cells that co-localized with Sox10, were also mitotically active, suggesting that both of these transcription factors are permissive of the proliferative, de-differentiated state.
The most notable change after injury was observed with respect to Jun and Oct6 expression. A robust increase in Jun expression has been reported in denervated SCs [40,62]. Indeed, Jun is a critical regulator of de-differentiation such that loss of Jun results in an inability to downregulate myelination genes and severely impairs regeneration. The POU domain transcription factor Oct6 initiates the transition from pro-myelinating to myelinating SC [77]. Interestingly, a shift from cytoplasmic to nuclear localization in Oct6 expression has been reported in axonal regeneration in peripheral neuropathic conditions [78]. It is noteworthy that Oct6 expression is highly upregulated at a time when myelin is being degraded and SCs are repressing their myelin program. Oct6 could be playing multiple roles in governing SC function, a possibility that requires further exploration. Since neither Jun nor Oct6 are expressed in SCPs but only by late immature/pro-myelinating SCs at E18.5, it suggests that at the level of protein expression, the de-differentiated SC state cannot be equated to a single embryonic SC stage, but rather is unique and includes features from multiple developmental stages.
Several key transcription factors including AP2α, Etv5, and Pax3 that are associated with early stages of development were not expressed within denervated SCs. We observed only extremely rare Pax3 + cells in the adult nerve (data not shown). A recent report suggests that Pax3 labels approximately 1% of cells in the adult nerve and marks non-myelinating SCs [44]. This suggested that possibly only a subset of Sox10 + non-myelinating SCs express Pax3. Notably, we did not observe Pax3 expression following injury either, despite seeing robust expression in NCCs that were immunostained in parallel as a positive control. This is in contrast to a previous study [43] that reported Pax3 transcripts were detected in the denervated distal stump at seven days post transection injury. This discrepancy may be due to several factors: 1) the temporal expression profile of Pax3 may be delayed, such that it does not peak until after the 5 day time-point we examined, 2) severity of nerve injury (transection versus crush) may be an important determinant of the SC transcriptional response within denervated SCs, 3) young mice (3 weeks of age) may elicit a different response compared to the adult animals (P65) used in our study, and 4) Pax3 transcripts may not be translated. Future studies using a conditional Pax3 gene deletion could determine its role in establishing the SC repair phenotype after injury.
Our spatio-temporal expression study provides a comprehensive glimpse into the expression profiles of the various transcriptional regulators involved in SC development and in SC injury response. To summarize, the injured peripheral nerve contains a highly dynamic and heterogeneous population of glia that undergoes phenotypic reversion to a de-differentiated state by recapitulating a subset of early glial-associated transcription factors. Taken together, we provide evidence that "repair" SCs retain their core SC transcriptional program, while initiating the expression of a subset of embryonic genes that represent several embryonic SC stages. Since SC function is diminished in the adult aging PNS it may be necessary to artificially activate additional genes, particularly those that fail to be re-activated or are diminished following prolonged denervation, in order to maximize nerve regeneration. | 9,322.6 | 2016-04-08T00:00:00.000 | [
"Biology"
] |
eu ON APPROXIMATE AND CLOSED-FORM SOLUTION METHOD FOR INITIAL-VALUE WAVE-LIKE MODELS
This work presents a proposed Modified Differential Transform Method (MDTM) for obtaining both closed-form and approximate solutions of initial-value wave-like models with variable, and constant coefficients. Our results when compared with the exact solutions of the associated solved problems, show that the method is simple, effective and reliable. The results are very much in line with their exact forms. The method involves less computational work without neglecting accuracy. We recommend this simple proposed technique for solving both linear and nonlinear partial differential equations (PDEs) in other aspects of pure and applied sciences. AMS Subject Classification: 35C05, 35C07, 74H10, 76D33
Introduction
Wave equation is a second order Partial Differential Equation (PDE) used in the description of waves.It is of immense application in applied Mathematics, Engineering and Physics.Wave equations can be linear, or nonlinear initial-boundary value problems.A variety of numerical, analytical and semianalytical methods have been developed and proposed to obtain approximate, and accurate analytical solutions of various forms of differential equations in literature.Some of these methods include: Homotopy Perturbation Method (HPM), Homotopy Analysis Method (HAM), Adomian Decomposition Method (ADM), Variational Iteration Method (VIM), Differential Transform Method (DTM) and so on [1]- [8].
DTM is an iterative process that is based on the expansion of Taylor series.It was first proposed by Zhou in 1986 when he used it to solve linear and non-linear initial value problems in the analysis of electric circuit [9].DTM in most cases, provides analytical approximation, and exact solutions in rapidly convergent sequence form.Despites these advantages, many researchers have improved and modified the DTM for better results and applications [10][11][12][13][14][15][16][17][18].
The MDTM is useful in obtaining exact and approximate solutions of linear and non-linear differential systems.It has been used by several authors to solve different systems easily and accurately.
The main idea of this work is to use the modified DTM to solve some wave-like PDEs by considering both cases of constant and variable coefficients.
2. Notion and Basic Theorems of the MDTM, see [15], [17], [18] Let m (x, t) be an analytic function at (x * , t * ) in a domainD, then in considering the Taylor series ofm (x, t), regard is given to some variables s ov = t instead of all the variables as in the classical DTM.Thus, the MDTM of m (x, t) with respect to t at t * is defined and denoted by: Thus, we have: The equation (??) is called the modified differential inverse transform of M (x, h) with respect tot.
Illustrative and Numerical Examples
Here, we apply the proposed method to the following problems.
Cases 1 & 2 {Wave-Like Models with Variable, and Constant Coefficients}
Case Problem 1.Consider the wave-like model with variable coefficients: subject to the initial conditions: Solution procedure to Case Problem 1. Taking the modified differential transform (MDT) of both sides of (1),we get Corresponding to (3) is the recurrence formula (4) with initial conditions in (5): Using ( 5) in ( 4) gives the following components: In general, we have: Substituting ( 6) and ( 7) into the solution series, we have: Equation ( 8) is the closed-form solution of case problem 1.
Case Problem 2. Consider the wave-like model with constant coefficients: subject to the initial conditions: u (x, 0) = 0 and u t (x, 0) = 2 sin x.
Solution Procedure to Case Problem 2. Taking the modified differential transform of both sides of (9), we get: Corresponding to (11) is the recurrence formula (12) with the initial conditions in (13): Using ( 13) in (12) gives the following: In general, we have: Substituting these into the solution series, we have: Equation ( 16) is the closed-form solution of case problem 2.
Discussion of Results
In this subsection, we will present graphs for the exact and the approximate solutions for discussion of results.The approximate solutions contain terms up to the power of seven (7). Figure 1 and Figure 2 for exact and approximate solutions of case problem 1 in that order.
Concluding Remarks
In this work, we solved initial-value wave-like models with variable, and constant coefficients for obtaining both closed-form and approximate solutions.For this, we used a proposed solution technique: a modified differential transform method (MDTM).Our results when compared with the exact solutions of the associated solved problems, showed that the method is simple, efficient, effective and reliable.The results are very much in line with their exact forms, even without neglecting accuracy.We therefore, recommend this solution technique for solving both linear and nonlinear partial differential equations (PDEs) in other aspects of pure and applied sciences. | 1,072.4 | 2016-10-01T00:00:00.000 | [
"Mathematics"
] |
Ranking different cities and villages in province of Isfahan based of analysis of social indicator during the statistic period 1996-2006
Article history: Received April 17, 2012 Accepted 11 June 2012 Available online June 12 2012 One of the primary governmental concerns is to understand the level of economical and social development in different locations of a country. Understanding social and economical characteristics of different cities help provide necessary assistance for underdeveloped areas and promote some value added activities such as tourism in better-developed areas. In this paper, we present an empirical study based on Factor Analysis to rank different towns, villages and cities located in city of Isfahan, Iran in terms of various socio-economical criteria. The study gathers necessary information from 1996 to 2007. The results of our survey indicate that three cities of Feridan, Nayeen and Falavarjan are in the best position in terms of different social and cultural criteria while Lenjan, Barkhar and Mymehand Isfahan are located in the worst positions. © 2012 Growing Science Ltd. All rights reserved.
Introduction
Although talking about culture has a long historical background, the subject of culture development has become a part of country and social development in recent decades.When talking about economic development and politics development, eking should be attending to culture politics (hadadian1996).The indicators of culture development were first mentioned in 1967 in a panel discussion containing experts of 24 of countries in UNESCO until study about culture politics.Since then, there have been different works associated with factors influencing socio-economic development.Gukalova et al. (2009) highlighted the characteristic features of the socio-geographical technique in studying the "quality of life of population".They determined the most important parameters, which are responsible for the quality of life in these countries and presented some experience in evaluating the balanced development of Ukraine and Russia.
Human migration is one of the conflict constellations in regions influenced by climate change, but can also contribute to climate adaptation.Migrant social networks can help us build social capital to increase the social resilience in the communities of origin and trigger innovations across regions by the transfer of knowledge, technology, remittances and other resources.This also helps absorbs more people as tourist in the regions.Gaughan et al. (2009) presented some models to investigate the impact of tourism forest conversion, and land transformations in the Angkor basin, Cambodia.Ryashchenko and Gukalova (2010) explained that public health in the system can be a major indicator of the quality of life in countries like Russia and Ukraine.They discussed the methodological issues of using the public health indices as an assessment of the quality of life in regions of Russia and Ukraine.They also investigated the relationship between the notions "public health" and "quality of life" and outlined the techniques of comparative assessment of public health at the level of the countries and their regions.The influence of the public health level on international ratings of Russia and Ukraine was studied in terms of the potential of human development.They also presented the factors and conditions of the spatial differentiation of public health indices in some regions of Russia and Ukraine.Kancs (2011) explored labor migration in the enlarged EU by adopting the Krugman's framework of the New Economic Geography.They studied determinants of labor migration, such as market potential, wages, cost of living on one hand, and labor migration on the other hand, simultaneously, which helped address important issues facing the traditional reduced form studies.They reported that European integration would trigger labor migration between and within the Member States of the enlarged EU.Su et al. (2012) qualitatively investigated urbanization influences at an eco-regional scale by analyzing landscape pattern and ecosystem service value changes in four eco-regions in China.Their results indicated that the four eco-regions exhibited a similar urbanization process of rapid population growth, economic development and urban expansion.They reported that the considerable urban expansion led to a loss of 8.5 billion RMB yuan ecosystem service values per year on average between 1994 and 2003 and found that landscape fragmentation, configuration and diversity, which were induced by urbanization, could substantially impair the provisions of ecosystem services.Their results emphasized the importance of joint application of landscape metric analysis and ecosystem service values assessment in landscape planning.
Dennis Wei and Liefner (2012) explained the drastic rise of China in foreign investment, export, and ICT production.They explained that the research on China was embedded in China's reform process, as well as theoretical development in economic geography through a comprehensive review of the literature on globalization, industrial restructuring and regional development.Hewitt and Escobar (2011) explained that urban development was an intensive and poorly controlled issue in Spain and they warned serious concerns for sustainability in the territory.They suggested that to move towards a more sustainable configuration, it is necessary to acquire the implication of all stakeholders in the Madrid region.They presented a framework for implementation of sustainable development initiatives through sustainability action groups, in which integrated land use techniques and participatory planning activities were used to test and develop new policy initiatives.Transportation plays an important role on urban development and air transportation is considered as one of the most important items in some areas.Fenley et al. (2007) investigated whether air transport must be considered seriously as a major transport option for the sustainable development of Amazonas.Scheffran et al. (2012) investigated possible opportunities, innovative methods and institutional techniques for migration as a contribution to climate adaptation.They used the Western Sahel as a case study region, with a focus on Mali, Mauritania and Senegal, using quantitative and qualitative analysis of remittances at the national level, and a micro-level analysis on the role of migrant networks in these countries in specific co-development projects in water, food and energy.
In this paper, we present an empirical study to rank different regions of city of Esfahan in terms of their socio-economical features.The organization of this paper first presents details of the survey in section 2, section 3 explains details of our findings and concluding remarks are given at the end to summarize the contribution of the paper.
The proposed study
Social development is one of the most important components of development of any country.Social development has different meaning from various point of view such as passing traditional society to modern or industry by division of affaires or social invest, human identity rational, communicative, trust, etc.In Iran this concept is more in accordance with identity and trust.Social development in 1995 was introduced as defeating with poverty.The indictors of development include reaches to educational services, percent of educated population, life length of time the ratio of death, appropriate nourish sanitation level and controlling of diseases and etc shelter suitable, equal between men and women, etc.
Human development is one of the most important issues and is confronting with poverty and injustice that should be provided by social policies.Social development with division of social living in to 4 parts of political, economical, culture and social has become more important.The focus of social development is a society that according to it all the political, economical, culture and social dimensions has been taken to account either micro or macro level.Development is a very complicated process where society transfers from a historical era to another new one.
The province of Isfahan has always been one of the pioneers in development of Iran.Regional analysis specially, in balanced development and factor of different regions in territory of Isfahan province indicates the imbalance in development and social justice.This province maintains the following characteristics.
Social -cultural development and its indicators
This research is about analysis of changes of one of the most important dimensions of development, which is social development or social-cultural development.In fact because of the very interference of this development we use the combination word social-culture word.Social-culture development means the change in following aspects: 1-populational issues, 2-educational issues, 3-occupational issues, 4-matrimonial issues, 5-treating -sanitation issues, 6-book and book reading, and 7-press and media.
Methodology
The methodology is quantitative -analyzing the statistic group of research are the cities at Isfahan.The needing data are accumulated (gathered) from general counting of people and domicile of 1996 and 2006 and by using Factor analysis (FA) we analyze social cultural indicators in the cities located in province of Isfahan.The primary objectives are to analyze the social, cultural indicators in the cities located in the province, to reduce the number of items using FA and finally to rank all cities based on the criteria.FA is considered as a statistical method used to explain variability among observed, correlated variables in terms of a potentially lower number of unobserved components called factors.In other words, it is possible, for instance, that variations in five or six observed variables mainly reflect the variations in fewer such unobserved variables.FA looks for such joint variations in response to unobserved latent ones.The observed variables are formulated as linear combinations of the potential components, plus "error" terms.The data gained about the interdependencies among observed variables are used later to reduce the set of variables in a dataset (Fabrigar et al., 1999).
Characteristics of the province of Isfahan
Isfahan province with area of 107090 km 2 and more than 4559256 population is located between 30 and 43 min based on last statistical information in 1386 this province had 22 cities , 45 parts , 46 towns, 124 villages and its centre was Isfahan.Fig. 1 shows the location of this province in Iran.
Fig. 1. The province of Isfahan
Table 1 shows the number of towns, parts, city and villages at Isfahan state based on country division.As we can observe from the results of Table 2, Chi-Square represents a meaningful value, which validates the results of our FA analysis.Table 3 shows details of the criteria used for analyzing the cities and towns including cinema, salon, etc. and the associated correlation numbers assigned to them.We have also use principle component analysis (PCA) to find important factors influencing development of cities.Our PCA implementation yields a total standard deviation of 7.485 where the first factor with 3.5 maintains the most influencing effect.Fig. 2 shows details of our findings regarding PCA analysis.In summary, the results of our analysis indicates that cinema, number of saloons, number of theatre and musical places, painting, public library, brain nourish for infant and juveniles are the first six important factors, respectively.In addition, the existence of university, items of arts and culture for export, local press, Islamic places blessedness, mosque, hosineey (a kind of religious place), religious places, number pious foundation and residence installation come in other orders, respectively.Fig. 4 shows details of our classifications.
The results
In order to find out the relative importance of each criteria, we should inspect separately indicators of each part on illiteracy rate, which is an important indicator in social development.There are not much differences between illiteracy rates in different cities due to expending education centres and learning in all of the country, nevertheless, the largest literate rate change is 6.45% for Semirom and the least is 1.55% for Falavarjan and the ranking of the cities are summarized in Table 4.
Table 4
The ranking for different cities in terms of the growth level in literacy (1996)(1997)(1998)(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006) Increasing in the rate of literacy in the level of cities of province, on one hand is because of old illiterates people and on the other hand is because of expansion of educational services.There are some reasons to encourage people to learn more.Among these, we can refer to percentage indicator of accessibility to newspaper in different cities.The best one is Natanz with 235.94% and the least one is Golpayegan & Feridan with -100% (See Table 5).
Table 5
Cities sate Isfahan ranking about percent accessibility to bulletin (1996)(1997)(1998)(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006) Prediction about status of mass media is complex because against the number of statistic we face to culture case, for example if public library, book, cinema, and etc. would increase we would not be able to say the culture is improved.Finally after using the factor analysis we calculate the statue of cultural and social development in cities and the result is that the worst condition is in Isfahan and the best condition is in Feridan and other cities are summarized in Table 6 as follows.Note that the ranking does not mean that Faridan is the best facilities and Isfahan is the worst one because these variances have improved in 1996-2006.In fact, when we calculate and compare indicators of 1996-2006 we receive the information of Table 7 as follos.
Conclusion
In this paper, we have presented an empirical study to measure the relevant importance of various cities in terms of different criteria including population growth rate, literate rate, newspaper access, public library, cinema, number of salon, etc..The proposed study used Factor Analysis to find important components influencing development of different villages and cities located in province of Isfahan, Iran.The study considered 15 criteria and applied for 17 cities in this province.The results of our study indicate that Kashan was in the best position and Falavarjan was in the worst position.
Of course, we should pay attention to population variances specially population emigration, which means that some of cities have the high facility but due to the population growth and emigration they have found the best conditions.
Fig. 2 .Fig. 3 .
Fig. 2. Standard deviation of different components Analysis of variance (ANOVA) test was another technique to examine different impacts of various factors.Fig. 3 shows details of our finding where the first factor represents 80.1% of variance of factors.
Table 1
CharacteristicsData matrix for technique of FA should have meaningful information and we use KMO Bartlett's test to verify whether the data represent meaningful numbers.Table2shows details of our findings.
Table 3
Coloration matrix socio-culture variance in cities located in province of Isfahan | 3,173.8 | 2012-10-01T00:00:00.000 | [
"Economics",
"Geography"
] |
Application of the Vignette Technique in a Qualitative Paradigm
Vignettes are short depictions of typical scenarios intended to elicit responses that will reveal values, perceptions, impressions, and accepted social norms. This article describes how vignettes were developed and used in a qualitative linguistics anthropology study to elicit those responses as experienced by mixed-heritage individuals in attaining heritage legitimacy despite their inability to speak their heritage languages. The vignettes were administered during in-depth, semi-structured interviews. Eight participants were asked to reflect and respond to prompts which revolved around typical experiences where speakers were limited by their lack of heritage language proficiency. Based on the vignettes, the participants described how the speakers would linguistically strategize to compensate their limited abilities in using the heritage languages. At the same time, the cultural means through which speakers gain legitimacy within their own heritage groups were also identified. Essentially the use of the vignettes facilitated in generating data that would have otherwise been challenging to elicit given the culturally sensitive as well as highly private nature of the phenomena under investigation. The application of vignettes provided a less intrusive and non-threatening way of obtaining perceptions, opinions, beliefs and attitudes based on responses or comments to stories depicting lived experiences of the participants that the researcher is otherwise not privy to as an observer. However, application of this data elicitation technique can prove challenging for the researcher. A critical analysis of the development, implementation and validity of vignettes as a research tool is extrapolated here within the setting of a heritage legitimacy study as an exemplar.
INTRODUCTION
This article explores the use of vignettes as a data elicitation technique employed in a qualitative paradigm for an anthropological linguistics research investigating mixed-heritage individuals claiming heritage legitimacy.This examination is of particular interest in that it highlights the significant potential of using vignettes in place of participant observations for culturally sensitive research contexts that are also regarded as highly private in nature, such as the heritage legitimacy study.Drawing from the research by Mahanita (2016) on mixedheritage people claiming heritage legitimacy, the development, applications and validity of using vignettes are examined and discussed in the following sections.This examination is significant as studies in using vignettes in the developing multi-ethnic world are emerging, but with no critical examination of their usefulness in such settings (Gourlay et al., 2014;Mahanita, Nor Fariza & Hazita, 2016).Thus this article will contribute towards this emerging
BACKGROUND
The literature has described vignettes as short stories or concrete scenarios and examples of situations, people or individuals and their behaviours that are written about or pictorially depicted in specified circumstances (Finch, 1987;Hazel, 1995;Hall, 1997;Hughes, 1998;Renold, 2002;Wilt, 2011;Braun & Clarke, 2013).The employment of vignettes as a data elicitation technique encourages articulation of perceptions, opinions, beliefs and attitudes from participants as they respond to or comment on the concrete scenarios and situations as depicted.The vignette has been found to be most useful for especially potentially difficult topics of enquiry as it is non-personal and perceived as less threatening (Barter & Renold, 2000;Hughes & Huby, 2002;Wilks, 2004).Most often, vignettes are employed together with other methods like interviews and focus groups in qualitative studies.
However, prior to this, the vignette was typically used in quantitative designs for health sciences, social work and psychology based studies.As a quantitative tool, it was usually presented with a series of predetermined responses with assigned values enabling respondents to rate a particular response (Wilks, 2004).Interestingly, many researchers found that the quantified data elicited from vignettes limited the potentials of the vignettes in generating information which are far richer and more complex.This realization persuaded them to lean towards qualitative paradigms when utilising vignettes for data elicitation (Barter & Renold, 2000;Landau, 1997;Kugelman, 1992).Discussions pertaining to vignettes as a quantitative elicitation tool nevertheless is beyond the scope of this article and can be accessed elsewhere (See Hughes and Huby, 2002;Wilks, 2004).
Increasingly, the use of vignettes is recognised to be most valuable for qualitative designs in place of naturalistic research approaches done through observations where researchers either situate themselves as a participant observer or non-participant observer (Wilks, 2004;Mahanita, Nor Fariza & Hazita, 2016).This is because there are major ethical and practical problems that accompany such an anthropological approach, compounded by potential observer effects.Previous studies in social work for example (Braun & Clarke, 2013;Wilt, 2011;Landau, 1997;Wilson & While, 1998;Kugelman, 1992) demonstrated how qualitative vignettes-based studies of ethical factors and dilemmas elicited richer and deeper understandings of the problem that are not captured in quantitative paradigms.Hence, the use of vignettes in research evidently can offer new possibilities in generating more meaningful and insightful understandings of complex qualitative relationships.
Interestingly, it is important to note that in the field of anthropological linguistics, a survey of the literature has revealed that there is currently no known published research on the use of vignettes in qualitative paradigms as an elicitation tool.Thus, the study by Mahanita (2016) investigating heritage legitimacy of mixed-heritage individuals using vignettes in place of participative observation is innovative in the aforementioned field, as it provides an alternative valid method to facilitate gathering of reliable data.
A range of social science literature, albeit limited, about the use of the vignette technique in qualitative research have claimed that vignettes technique is a useful and insightful way of eliciting perceptions (Jenkins et al., 2010;O'Dell et al., 2012;Gourlay et al., 2014), beliefs and meanings especially for sensitive issues of inquiry that may not be accessible through other methods.However, there remain methodological concerns and challenges that need to be considered.Of particular importance is the internal validity and reliability of the vignettes in relation to their appropriateness, relevance, and realism, to ensure the interpretations and responses they elicit reflect actual behaviour.Hence, the application of vignettes in the study on heritage legitimacy highlighted in this article is an examplary illustration that elucidates the development, construction and internal validity processes of the vignettes used.A description of the application of vignettes in the context of the study on heritage legitimacy is described below.An account of the study's objectives and its research methodology design provide a context wherein the choice to use, the process in developing and validating the vignettes, as well as the procedures entailed in using them to elicit responses, is explicated.
VIGNETTES IN THE HERITAGE LEGITIMACY STUDY
There is an incremental interest in sociological and anthropological studies on aspects regarding the identity of mixed-heritage people in relation to their heritage languages.In the field of linguistic anthropology, a growing area of research focuses on mixed-heritage people and factors influencing the development of their identity (Renn, 2008).Among the factors that have been established to be significant are family, cultural knowledge, physical appearance; peer culture and acquisition of the heritage language (Khanna, 2004;Wallace, 2001).
Family has been identified as one of the important factors that create an impact on a mixed-heritage person's social identity (Yancey & Lewis, 2009).Close contact with family is vital not only for building a bond with members of the family but also for an individual's development in heritage language and culture.
A family network is made up of the immediate or nuclear family as well as the extended family, comprising grandparents, aunts, uncles and cousins.In fact, among mixedheritage people, the development of intimate interaction and a sense of belonging to their particular heritage groups begin with the relationship and interaction that they have with their extended maternal and paternal family members (Rockquemore & Brunsma, 2002).This is supported by Wallace (2001, p. 87 as cited in Mahanita, 2016) who reiterated that the identity of a mixed-heritage person is shaped by these initial social interactions, impressions of and networking with single-heritage family members who are role models representing their respective heritage groups.
On the other hand, some studies have also shown that mixed-marriage families receive very little support from their single-heritage family network and society due to discrimination, rejection and stigmatization (Yancey & Lewis, 2009).These families also experience more conflict due to cultural differences as compared to those of same ethnic/race marriages.Hence, such rejection from and conflict with their single-heritage families, may cause some mixed-heritage individuals to lose contact with the heritage groups of their parents and are cut off from any linguistic or cultural exposure.There are also cases where the rejection from one heritage group results in assimilation towards the other (David, 2008).
Cultural knowledge is the second factor that influences the identity of mixed-heritage people.Wallace (2001) found that the participants in her study referred to elements such as choice of food, customs, traditions and festive celebrations when they were asked about heritage group membership.The extent of a mixed-heritage person's knowledge of heritage group culture depends on what they have learnt from the interactions with their parents or maternal or paternal relatives (Renn, 2008).Some may have extensive cultural knowledge of both maternal and paternal heritage groups whereas others may have knowledge of only one of their heritage groups or there are even those with no such knowledge at all.Physical appearance or phenotype is the third factor that influences identity development in a mixed-heritage person.Physical appearance here refers to skin tone, hair colour or texture, shape of eyes and nose.Mixed-heritage people have reportedly encountered ignorance, disbelief, condescension and hostility from members of the society that they live in, just because their phenotype is a mismatch of what they claim to be (Pao, Wong & Teuben-Row, 1997;Romo, 2011).In addition, they also have to deal with the uncomfortable and provoking question, "What are you?" usually asked at the beginning of a conversation by those who are unable to categorise their ambiguous phenotypes as belonging to existing ethnic or racial groups within the society (Mahanita, 2016, p. 56).
Peer culture is another major factor that shapes the identity of mixed-heritage people.The availability of other mixed-heritage people in the surrounding community provides the much needed social support for mixed-heritage individuals (Renn, 2008;Rockquemore & Brusnma, 2002) in dealing with resistance, rejection and discrimination from single-heritage peers.This in turn, promotes the development of a separate mixed-heritage identity such as the one identified as multiracial identity (Root, 2001;Renn, 2008) where mixed-heritage individuals identify with other fellow mixed-heritage individuals.
In order to be able to claim legitimacy in their heritage groups, another aspect that is equally important is the ability to speak the heritage language (Pao, Wong & Teuben-Row, 1997;Shin, 2010;Renn, 2008;Wallace, 2001;Yancey & Lewis, 2009).According to Wallace (2001, p. 67) language is not only an "essential" dimension of a mixed-heritage person's identity, but also plays an important role in their daily interactions with family members and peers.Equipped with the ability to speak their heritage language as well as knowledge of their heritage culture, mixed-heritage people feel more confident to identify themselves as a part of their heritage groups (Renn, 2008).This is because being able to understand the nuances and subtleties embedded in their heritage languages and cultures gives them a feeling of rootedness within their heritage groups.
However, as mixed-heritage people who are unable to speak and understand their heritage languages are increasingly becoming the norm (Pao, Wong & Teuben-Row, 1997;Shin, 2010;Wallace, 2001), more research is needed in order to understand how they cope and improve their daily communication (Remedios & Chasteen, 2013) with their singleheritage family members.Soliz, Thorson and Rittenour (2009) assert that not much is known about the role of language and how it is used by mixed-heritage people in communicating with their family members.Shin (2010) concurred that research on mixed-heritage people from the linguistic perspective is still lacking.
In the Malaysian context, past studies on mixed-heritage people have only focused on the displacement of heritage languages and shifts to dominant languages such as Bahasa Malaysia and English that take place in their families (Soo, Chan & Ain Nadzimah, 2015;Lee, King & Azizah, 2010;David 2008;David & Nambiar, 2002;Kow, 2003).However, to the best of our knowledge, there are no published studies that investigate the scenario that takes place after the process of language shift.As such, a study as the one reported in this article is pertinent and propitious as it sheds light on the types of linguistic resources and strategic competences that the mixed-heritage individuals employ when communicating with their maternal and paternal families, in their endeavours to attain heritage legitimacy (Mahanita, Nor Fariza & Hazita, 2016;Mahanita, 2016).
Essentially this study explored the perceptions of the mixed-heritage individuals regarding their inability to speak and understand their heritage language(s) in relation to claiming legitimacy within their heritage groups (Mahanita, 2016).These perceptions are investigated based on the essentialist theoretical perspective on language and identity which posits that mixed-heritage groups commonly associate sense of self to the ability to speak their heritage languages; and inability to do so, disqualifies them from identifying with their heritage groups (Lanza & Svendsen, 2007;Bucholtz & Hall, 2004;Saville-Troike, 2003;Spolsky, 2001;Romaine, 2000).Relatedly, the ways in which they utilise their linguistic repertoire and strategies to compensate their inability to speak and understand their heritage languages reveal these challenges.The compensation strategies applied may include any other cultural means by which they attempt to accentuate their claim for legitimacy within their heritage groups.
For the aforementioned study, these challenges are mainly revealed through scenarios of communication.At the same time perceptions of their maternal or paternal family members in regards the legitimacy of the mixed-heritage individuals as members of the heritage groups provided another dimension for analysis.Given the aim of the study, ideally it is necessary for the phenomenon to be investigated through participant observation wherein the communication circumstances unfold through series of family related interaction events.This require an intrusive methodology as the researcher, as an outsider, would require permission to be included in the family realm on a daily basis or for selected occasions.Hence, for the researcher, an insight into these series of situations is nearly impossible to be participant to as they unfold or occur within the mixed heritage individuals' lived experiences.
Moreover, key to any research in linguistics anthropology is the recruitment of willing participants.Even so, these 'willing' participants establish borders that limit the researchers' access into their family spaces, forcing the research to be satisfied with third party perspectives.This was the case with many studies on mixed-heritage individuals and their related family members.The literature has shown that previous related anthropological linguistics researches on mixed heritage, language and identity (Remedios & Chasteen, 2013;Khanna, 2004;Khanna & Johnson, 2010;Romo, 2011;Kow, 2003;David, 2008) typically resorted to using questionnaires, secondary data, and in-depth interviews, respectively, mainly because it is very rare that researchers are granted access as participant observers in the families' realm.Seeking an alternative method that may provide richer insights into the phenomena, Wilt (2011) suggested using vignettes as a data collection technique in place of participant observation and the static questionnaires.
METHODOLOGY OF EMPLOYING VIGNETTES IN THE STUDY
The mixed-heritage study (Mahanita, 2016) discussed as an exemplar here adopted a qualitative paradigm applying a multiple embedded case study design.A multiple-case study examines several cases for the purpose of understanding their similarities and differences as well as increasing the reliability of its findings (Baxter & Jack, 2008).Meanwhile the embedded units incorporated within each of the case provided a detailed understanding of the issue of mixed-heritage individuals claiming legitimacy within their heritage groups (Yin, 1994).This was made possible as the embedded units of each case comprised a mixedheritage participant and a single-heritage participant who are related to one another.
Purposive sampling was employed to recruit four mixed-heritage participants (Khanna, 2004;Tan, 2012) who are unable to speak and understand their heritage languages.The participants are aged between 21-42 years old and live in the Klang Valley.Their parents are from various ethnic backgrounds whose first languages include Tamil, Telugu, Sundanese, Thai, Bidayuh.In addition, another four single-heritage participants who are either the maternal or paternal family members (of the mixed-heritage participants) were also recruited.They are aged between 49-66 years old and represent four different ethnic groups.These single-heritage participants were included in this study because they are considered as representatives of their heritage groups.Their perspectives were sought regarding the legitimacy of the mixed-heritage participants as members of their heritage groups (Mahanita, 2016).
Additionally, as it is important for a case study to incorporate the use of multiple sources of information (Creswell, 1998), this study employed triangulation of data obtained from semi-structured interviews, fieldnotes and vignettes.The semi-structured interview data consisted of family heritage background, mixed-heritage identity linguistic repertoire as well as other cultural means accentuated in their efforts to claim legitimacy within their heritage groups.Additionally, the fieldnotes provided descriptions of participants' behaviour, emotions, frame of mind; as well as effects of the setting on the participant, time, location and quality of recording (if any) written down by the researcher during the interview session.
Meanwhile, data from the vignette responses comprised the languages and communication strategies used by the mixed-heritage participants in communicating with their single-heritage family members.These data then were corroborated with data obtained from the semi-structured interviews with their single-heritage family members (i.e.maternal or paternal relatives) who provided perspectives representative of members of the heritage group.A more in-depth account of the said research and its findings can be found elsewhere by Mahanita (2016) and Mahanita, Nor Fariza and Hazita (2016).
The following section will elaborate further the rationale in choosing the vignette technique for the aforementioned study, including the development of the vignettes and their validity processes.Relatedly, of immediate concern in relation to validity and reliability of the methodology is the extent to which self-reported data elicited as responses to a vignette are accurate and credible (Creswell, 1998) or authentic and trustworthy (Lincoln & Guba, 1985).At the same time, it is important to note that the description provided with regards the use of the vignette technique described in this article is situated in the context of linguistics anthropology research such as heritage legitimacy studies.Hence, it is recognized that the use of vignettes may differ in terms of purpose as well as process of its applications for other types of studies.Nevertheless, of particular relevance are the steps explicated in this study that are intrinsic to the validity of the vignettes as reliable scenarios representing mixedheritage lived experiences from four ethnic groups in Malaysia.Harwood, Soliz and Lin (2009) and Wilt (2011) stressed that many studies on multiracial families have relied heavily on observations, self-reported data and interviews for data collection.In the same token, the qualitative data of the aforementioned study is mainly derived from participants' self-reports (i.e.introspection and retrospection) of their mixedheritage experiences, albeit an alternative method.This alternative method gathered selfreport data from the mixed and single-heritage participants through structured interviews and vignette methods instead of the traditional observation method or participant observer technique.
USE OF VIGNETTES IN THE STUDY
This method was employed because the participants rejected the use of the observation method, citing it as being extremely intrusive (Mahanita, 2016).The participants preferred the vignette technique as it helped to maintain a comfortable distance with the researcher when discussing sensitive matters from a third-person point of view.By doing so, it becomes less-threatening for them as compared to talking straight-forwardly about their personal experiences (Braun & Clarke, 2013;Mahanita, Nor Fariza & Hazita, 2016).Moreover, the mixed-heritage participants felt more comfortable in revealing sensitive matters and sharing past experiences of frustration, exclusion or rejection with regards to their inability to speak and understand their heritage languages.
Even so, as with any self-reported data, the possibility that the participants only verbalise what they remember or are willing to partially share their experiences but withhold the rest, may jeopardize the authenticity of the responses prompted by the vignettes.Additionally their responses may contain discrepancies between what they say they would do and their actual behaviour in real life (Carlson, 1996).To avoid this methodological issue, the study introduced third party feedback provided by the single-heritage relatives.As described in the methodology above, embedded multiple case study design that was employed included the case participant as well as his or her family member representing either the singleheritage maternal or paternal side of the family.This way data from the mixed-heritage participants' self-reports are verified against the feedback from their single-heritage relatives.At the same time, the self-report data are also further verified through retrospective semistructured interviews (Mahanita, 2016).This corroboration of data elicited from the selfreported responses as prompted by the vignettes provided some measure of validity and reliability in regards the authenticity of the responses in lieu of real observed behaviours.
DEVELOPMENT AND CONSTRUCTION OF VIGNETTES
An important aspect about vignettes to be explained here is their development and construction.This section provides a description of the method employed in developing and constructing the vignettes used for the aforesaid study by Mahanita (2016).It is hoped that its explication will guide further use of vignettes in linguistic anthropological type of studies.
Authenticity and relevance are two aspects that should be aimed for in developing a vignette (Renold, 2000;Hughes & Huby, 2004) in order to ensure the quality of legitimacy in responses as well as encourage quantity of data elicited by the participants.The length of vignettes can also affect the quality and quantity of data elicited.Previous known users of vignettes in social work and psychology studies (Shin, 2010;Wallace, 2001;Wilson & While 1998;Pao, Wong & Teuben-Row, 1997) found that longer texts were found to generate careless and irrelevant data due to loss of interest while reading by the participants.On the other hand, the use of shorter vignettes consistently elicited optimum response rates in terms of succinct, concise responses and in shorter duration.The vignettes constructed by the researcher for the discussed study is short self-contained exemplars of typical scenarios and situations experienced by the mixed-heritage participants.
For this study on mixed heritage individuals' claim for legitimacy, two main sources of information informed the development of the vignettes.The first were detailed recollections of actual occurrences or occasions of their lived experiences that were retrieved through informal conversations and interviews with them (Mahanita 2016).Meanwhile the second source came from descriptions of events observed in past studies on mixed-heritage individuals in similar situations (Carlson, 1996;Cheek & Jones, 2003;McKeganey et al., 1995;Barter & Renold, 1999;Rahman, 1996).
Seven scenarios were constructed for the current study depicting a range of recurring problems experienced by mixed-heritage individuals in their family realm.These scenarios included those listed below: To ensure internal validity and reliability of the constructs developed for the vignettes, they were piloted on five other mixed-heritage individuals between the ages of 20-24 years old.Nine vignettes that were originally piloted were reduced to seven as two particular vignettes were found to be redundant by the participants.For detailed description of the pilot study conducted please refer to Mahanita, Nor Fariza and Hazita (2016) and Mahanita (2016).Below (Fig. 1) is a sample of a vignette developed and administered for the study sourced from Mahanita (2016, p. 266).Other samples of vignettes are included in the Appendix.
Asha is a mixed-heritage girl.Her father is an Indian man, whereas her mother is a Bidayuh lady.Her father's heritage language is Tamil and her mother's heritage language is Bidayuh.She is very fluent in English language and her command of Bahasa Malaysia is good also.Unfortunately, she is not able to speak Tamil or Bidayuh except that she knows a few words from these two languages.When asked about herself, she claims to be both Indian and Bidayuh.She desperately wants to be able to share their jokes or gossips and also be able to express her inner-self to them; but she is unable to.In short, she is unable to reach out to her heritage groups at a deeper level because she lacked proficiency in their languages.What do you think of Asha's problem?Have you experienced a situation like this in your family?How did you react?Additionally, care was taken to ensure that the content of the vignettes were plausible and meaningful to the participants as recommended by Braun and Clarke (2003).Procedurally, in the actual study, the mixed-heritage female participants were given vignettes with female character, as were the male participants.They were then presented with each vignette depicting a scenario and were given time to reflect on their own similar experiences triggered by the scenario.While reflecting, each participant were asked to reflect and write short notes on their thoughts, feelings and actions with regards to how they dealt with each scenarios.At the end of their reflection they were asked to respond immediately to two openended questions and their responses were recorded and then transcribed for analysis.They were reminded to respond strictly from their own personal viewpoint.According to Hughes and Huby (2014), the open-ended questions posed with vignettes should facilitate to generate responses that should be similar to the participants' real life reactions.For this study, reflections that inform about the range of communication strategies employed by the participants when attempting to communicate with their maternal or paternal relatives, were of particular interest.Semi structured interviews were conducted on both the mixed-heritage participants and their related single-heritage family members before the vignettes sessions to profile in detail their personal and family backgrounds, and after, to clarify further their reactions as reported in their responses.
All responses to the vignettes and semi-structured interviews were audio-recorded and then transcribed verbatim using the playscript style (Gibson, 2010).The completed transcriptions were returned to the respective participants for content verification within three days so as to confirm the transcribed contents were accurately documented as recommended by Kurata (2011) and Hassan (2006).The verified transcription is then perused to identify the descriptions of communication strategies that the participants reported they used in their communication circumstances with their single-heritage family members.Simultaneously the examination of the transcribed responses to the vignettes revealed data regarding the affected participants' feelings about their inability to speak their heritage languages with the family members.Both of these data sources were coded and categorized accordingly.In addition the data source elicited from the vignettes received further verification through comparisons made with theoretical perspectives as well as past related literatures.
FIVE PRINCIPLES IN DEVELOPING VIGNETTES
Based on the use of vignettes in the referred study, the following five principles are put forward for consideration when conceptualizing vignettes for the qualitative paradigm.Firstly, the stories developed for the vignettes must have comparable dimensions of internal consistency to be relatable and authentic.This is to enhance participants' engagement with the situations described.Secondly, to elicit a reliable range of responses representative of actual reactions in real life situations, the depicted experiences should range from normal to unusual occurrences.Thirdly, the vignette should also have an inherent ambiguity in its content to be non-directional and non-prescriptive.While, it needs to contain sufficient features of typicality for the situation to be identifiable, it should be vague enough to force the respondents to interpret the situation from their personal perspectives.This concept is promoted by West (1982, p. 9) as 'fuzziness', which he regarded as value in this technique since it leaves the participant room to define the depicted situation in his own terms (Finch, 1987).Fourth, in relation to the previous point, the participants should be asked to respond at two levels, that is to first provide culturally and socially desirable responses and at another level, how they think they would actually respond personally in that situation.Finally, the format in which the vignettes are presented should be appropriate to the participating individuals and the objectives of the study.While written narratives texts, as used in the heritage language study, are most common, images such as picture scenarios, video recordings, music videos, music and computer assisted reproductions are varied mediums that could be introduced and employed.The following section on analysis of data findings elucidates the extent to which the use of vignettes employed in the study on mixed-heritage legitimacy elicited data quality which were reliable and valid, hence underlining their value as a research technique in linguistic anthropology.
TRUSTWORTHINESS OF VIGNETTES AS AN ELICITATION TECHNIQUE
This section will show the extent to which the use of vignettes achieved the main objectives of the study on mixed-heritage individuals and their legitimacy issues.The findings of the data analysis underscores the feasibility of the use of vignettes as a data elicitation technique in place of participant observation, as well as the comparable authenticity of the vignettes used in the study in its ability to encourage realistic disclosures from the members of the ethnic groups.As described earlier, the vignettes were supported with semi-structured interviews as well as fieldnotes, and aimed at revealing the extent to which lack of proficiency in heritage languages would affect a mixed-heritage individual's standing among their single-heritage group members in terms of his or her legitimacy as a heritage group member.
In the previous section, a framework of how vignettes were designed and applied in the discussed study was provided, and the resulting principles derived from this development and application has been put forward as guide for future applications of vignettes as a plausible technique in qualitative linguistics anthropology research.What is also relevant for this article is the question whether the use of the constructed vignettes has generated trustworthy data that reveal typical and natural occurring responses representing actual behaviours in real circumstances of a mixed-heritage situation.To demonstrate the trustworthiness of the vignette technique, the quality of data and its analysis as generated from the responses to the vignettes are used to illustrate this potential reliability.
In the referred study, vignettes were produced to depict difficulties among the four mixed-heritage individuals to communicate in their heritage languages when interacting with the four single-heritage family members.These heritage languages suggesting their parents' family heritage backgrounds, include an interesting range from Tamil (spoken by descendants of Indian heritage), Bidayuh (spoken by descendants of Bidayuh indigenous heritage from Sarawak), Telegu (spoken by a smaller number of descendants from southern Indian heritage), Dutch (spoken by descendants from Netherland heritage), Sundanese (spoken by descendants from Sundanese heritage originating from Western Java, Indonesia), Malay (spoken by the descendants of Malays heritage and most Malaysians), Punjabi (spoken by descendants of the Punjabi heritage) and Thai (spoken by descendants from Thailand heritage).Table 1 below summarises the findings about the linguistic repertoire of the mixedheritage individuals' and their heritage language backgrounds.The information in Table 1 above illustrates a sample of data that is revealed through the vignettes.In this case study, the participants revealed that almost all of them are unable to speak their parents' heritage languages (except for MX3-S who speaks maternal heritage language-Malay) and rely primarily on Malay and English as the dominant vehicular languages when speaking to their single-heritage parent and relatives.Additionally, the vignettes prompted the mixed-heritage participants to reveal what they typically and frequently do when attempting to respond to a single-heritage family member speaking to them in their respective heritage languages.Upon analysis, these responses and reactions by the participants, as described by them, were identified as communication strategies.With reference to the sample highlighted above, the analysis revealed that the mixed-heritage individuals employed various communication strategies ranging from appeal for help to feigning understanding to salvage interactions with their monolingual family members.Some extracted examples of these are as illustrated below: 1. Appeal for help--MX2-N: "I ask them what they are saying..", 2. Inferencing--MX3-S: "I try to guess words based on context of conversation..", 3. Circumlocution (e.g.MX2-N: "I attempt to combine simple words to express meaning of message.." 4. Miming--MX2-N: "I use hands to signal meanings.." 5. Language switch--MX1-L: "I may begin my reply in the heritage language but switch to Malay or English after that.." 6. Feigning understanding--MX2_N: "I just nod and smile.." (Mahanita, 2016, pp. 172-175) Nevertheless, cross analyses with response patterns from interviews with the single heritage family members revealed high instances of accommodation by them suggesting that the maternal and paternal relatives were tolerant and flexible with regards to the participants' inability to communicate using the heritage language.Interestingly, in contrast to the perceptions of the mixed heritage individuals, this generous tolerance of accommodation in response to their lack of proficiency in the heritage language by the single heritage members suggested that proficiency in the heritage language, although is valued, is not a crucial requisite for gaining legitimacy within the heritage group (Mahanita, 2016:196).Although the mixed-heritage individuals get by with the vehicular languages and compensation strategies, they still harbour negative perceptions regarding their own inability to speak their heritage languages, even though the single-heritage families do not demand this of them.Table 2 below provides a briefed insight into their views on this matter.Source: Mahanita, 2016, pp. 167-168 In general, the excerpts from the responses generated by the vignettes show that the participants are disappointed with themselves for not being able to communicate using their heritage languages.The range of emotions that they expressed in response to the vignettes include "regret" and "angry" (participant MX1-L); "sad" (participant MX2-N); "left out" (participant MX3-S); as well as "feel sad" (participant MX4-H).These expressions of disappointment underlie the feelings of inadequacy or inferiority that fester in them as they perceive they lack one of the most important cultural credentials of their heritage groups.The authenticity of the findings and quality data generated from the vignettes is illustrated in Figure 2 below.The following exemplifies a selected vignette given to MX4-H which had prompted her to disclose her emotions towards the scenario depicted in the vignette.She had immediately identified with the scenario which triggered an emotional response as disclosed in Table 2 above.The scenario in the vignette had induced her to reveal that she felt sad but will compensate in other ways to be accepted when faced with similar real life experiences on several occasions before.
Naveena is a mixed-heritage person.Her father is Indian and her mother is Kelabit.She speaks fluent Bahasa Malaysia and English because she learnt these languages from school.Every year, they celebrate Deepavali and Gawai.However, during Deepavali she has to be among her paternal relatives, who all speak Tamil.Even though they are her relatives, she feels somewhat uncomfortable to be among them because she cannot understand a word they are saying.She feels she is Indian and Kelabit at the same time, but there are also times when she feels like she is an outsider.As a result, she decided to try to solve the problem.She thought to herself, if she is unable to speak the language, then maybe she should focus on other aspects of being Indian when she is with her paternal relatives.One way to do it is to wear more of "salwar khameez" which is commonly worn by Indian women.Notably, it can surmised that the mixed-heritage individuals' integration into the heritage group is defined by the extent to which they embrace the cultural ways and the related religious practices of the maternal or paternal single heritage families.Based on the cross analyses of responses generated from the vignettes, the single-heritage members of the heritage groups expressed that the mixed-heritage individuals will be accorded heritage legitimacy instead, if they demonstrate cultural credentials and kinship interests such as attending religious and family rituals, and befriending others from same heritage groups.This revelation concurs with Khanna (2004) and Wallace (2001), as cited in Mahanita (2016, p. 199), whose studies similarly found that consistent and persistent exhibitions of cultural traits and demonstrations of kinship interest suggest an individual's heritage inclination which accords the individual heritage legitimacy by the heritage group.Hence, the data elicited in response to the vignettes significantly revealed that among the four heritage groups investigated, fluent proficiency in a heritage language is not a qualification for attaining heritage legitimacy into a particular heritage group as feared by the mixed heritage individuals.Table 3 below illustrates the range of cultural credentials that the mixed-heritage individuals had reportedly demonstrated which had gained them their heritage legitimacy as claimed by the single-heritage family members.
Jewellery Punjabi
The cultural credentials which are listed in table 3 were noted as alternatives that the single-heritage members recognize as acceptable compensations for the mixed-heritage individuals' lack in their heritage language proficiency.Evidently, practicing them permits the mixed-heritage individual to claim their legitimacy within their respective groups.This tendency to compensate with cultural credentials according to Fernandez (1996) can dangerously be obsessive if the mixed-heritage individuals overcompensate in proving themselves as "more pure" (Fernandez, 1996, p. 31 as cited in Mahanita, 2016, p. 181) than members of their heritage groups.He cautioned this tendency to overcompensate for a perceived shortcoming is detrimental to the self-identity in the long run as it prevents individuals from addressing the source of their inferiority and overcoming it.
However, the evidence from the responses in the study on mixed-heritage suggest that the compensation behaviours reported by the participants seem to be fluid and flexible in terms of which cultural credential they want to accentuate, in which context, with whom and when they feel they want to do so.
In sum, the findings of the study on heritage legitimacy of mixed-heritage individuals revealed that these individuals were accorded legitimacy as a member of their heritage groups even though they were not able to speak or understand their heritage language.Instead they were accorded heritage legitimacy based on cultural credentials and kinship interest that they consistently and persistently exhibited as witnessed by the single heritage members.Even so, there is evidence to suggest that heritage inclination towards maternal or paternal or even both sides of the heritage groups is dependent the individuals' perception of the singleheritage families' acceptance of the individual, as well as the degree of closeness of their relationship to either groups.
CONCLUSION
In this article, application of vignettes as a technique for rich qualitative data elicitation in anthropological linguistics type of research is described and illustrated through the mixedheritage legitimacy study by Mahanita ( 2016) as an exemplar.The option to use vignettes in the aforementioned study is borne out of necessity as the eight participants, comprising four mixed-heritage and four single-heritage individuals were reluctant to give access to the researcher to observe and record behaviours pertaining to the use of (or lack of) heritage languages in their community and families' realm and private spaces.Although vignettebased methodologies are frequently used in the quantitative paradigm to examine judgment and decision making processes particularly in the clinical, behavioural, social work and health sciences domains, there are few known accounts about the use of vignettes within the qualitative research paradigm (Wilks, 2004;Hughes & Huby, 2002;Barter & Renold, 2000;Landau, 1997;Kugelman, 1992).This article, thus, contributes to highlight the potentials of vignettes as a qualitative data elicitation tool in place of participant observations where or when it is not plausible, or more significantly as a complimentary tool to allow the researcher to accentuate richer and more expansive insights that will generate patterns of behaviours for a more comprehensive analysis of a phenomenon.
Three defining features of the vignettes can be highlighted through the study on mixed-heritage legitimacy described in this article.Firstly, it is notable, based on the available evidence from the cross analysis conducted in the study, that the use of the vignettes demonstrated the feasibility of the scenarios depicted in generating similar responses to reallife scenarios of mixed-heritage individuals claiming heritage legitimacy.Several comparison studies and reviews in the quantitative field have yielded similar methodological conclusions where vignette methodologies demonstrated little difference from observations of actual behavior (Evans et al., 2015).Secondly, the utilization of vignettes is considerably high in terms of its flexibility and efficiency.As demonstrated in the development and construction of the vignettes exemplified in this article, the content of the vignette can be carefully tailored to provide accurate and concise contextual content concretely.Additionally, it is necessary to ensure a level of detail in them to support their realism and credibility as reproductions of natural occurrences, while omitting unnecessary and irrelevant information.Hence a carefully structured methodology using vignettes as an elicitation technique is more efficient in that it saves observation time, personnel of observers, funding and other related resources needed to carry out participant observation.Thirdly, this article and the literature suggest that vignettes as a technique is valid, reliable, inexpensive, and practical for phenomenological types of investigations.Regarding validity and reliability, Gould (1996) and Veloski et al. (2005) contend that a major advantage of using vignettes is that participants are less likely to be influenced by the act of observation, as the distance afforded by the vignettes as well as indications of confidentiality and the non-evaluative nature of its design minimizes the observer effect where the individuals being observed may modify their behavior because of being observed.This reaction may have an impact on the in the findings.By the same token, the revelation by the mixed-heritage and single-heritage participants in the study highlighted in this article demonstrated these advantages.
Clearly, vignettes as a technique for data elicitation and even as a vignette-based methodology, evidently can be a flexible, practical and powerful tool, suited for studying multilingual and multicultural phenomenon that are usually highly sensitive and exclusive in nature.As noted earlier, previous limitations to using vignettes as an elicitation tool in qualitative study lie mainly in the lack of direction of how to develop vignettes that are truly representative of an observed occurrence.The element of authenticity is vital to ensure that the responses are true and not imagined.Thus, in addressing this gap, this article has provided the principles for development and construction of vignettes for socio-cultural linguistics contexts and has exemplified its use in a qualitative linguistics anthropology study through the mixed-heritage study.
FIGURE 1 .
FIGURE 1. Sample of a Vignette Used in the Mixed-Heritage study
TABLE 1 .
Linguistic Repertoire and Heritage Language Background of Mixed-Heritage Participants
TABLE 2 .
Perceptions of the Mixed-Heritage Individuals Regarding their Inability to Speak and Understand their Heritage Languages
TABLE 3 .
Range of cultural credentials employed by mixed-heritage individuals to claim legitimacy | 9,580.2 | 2017-11-29T00:00:00.000 | [
"Linguistics",
"Sociology"
] |
An Optimal Path Management Strategy in Mobile Ad Hoc Network Using Fuzzy and Rough Set Theory
: Problem statement: Mobile Ad Hoc Network (MANET) is a collection of wireless mobile nodes that dynamically forms a network. Most of the existing ad-hoc routing algorithms select the shortest path using various resources. However the selected path may not consider all the network parameters and this would result in link instability in the network. The problems with existing methods are frequent route change with respect to change in topology, congestion as result of traffic and battery limitations since it’s an infrastructure less network. Approach: To overcome these problems an optimal path management approach called path vector calculation based on fuzzy and rough set theory were addressed. The ultimate intend of this study is to select the qualified path based on power consumption in the node, number of internodes and traffic load in the network. Simple rules were generated using fuzzy and rough set techniques for calculating path vector and to remove irrelevant attributes (resources) for evaluating the best routing. The set of rules were evaluated with proactive and reactive protocols namely DSDV, AODV and DSR in the NS-2 simulation environment based on metrics such as total energy consumed, throughput, packet delivery ratio and average end-to-end delay. Results: The results have shown that in MANET, decision rules with fuzzy and rough set technique has provided qualified path based best routing. Conclusion: The network life time and performance of reactive and proactive protocols in MANET has improved with fuzzy and rough set based decision rules.
INTRODUCTION
MANET is a collection of mobile nodes without any fixed infrastructure. They can be set up quickly where the existing infrastructure does not meet application requirements for reasons such as security cost or quality. MANET consists of nodes which can move freely and can communicate with other nodes by means of a direct link or by relaying through intermediate nodes. The performance of the network suffers as the number of nodes grows and a large network quickly becomes difficult to manage. There are various routing protocols designed specifically for MANET such as Ad Hoc on-Demand Distance vector (AODV), Dynamic Source Routing (DSR), Destination Sequence Distance Vector (DSDV) and Wireless Routing Protocol (WRP).
One of the key challenges in MANET is routing. Researchers have been investigating to find the shortest path from source to destination by applying varying methods. There exist numerous routing paths from source to destination node (Perkins and Bhagwat, 1994;Perkins and Royer, 1999) for data transfer. At present, the fields like fuzzy and rough set theory are having an efficient role in handling wireless network.
Fuzzy set theory is based on the degree of membership function (Zadeh, 1965). The membership function allows its value in the interval [0, 1]. Rough set theory proposed by Pawlak (1982) is an extension of classical set theory for dealing with vagueness in the real world. Its concepts and operations are defined based on the indiscernibility relation. It has been successfully applied in selecting attributes to improve the effectiveness in deriving decision rules (Jensen and Shen, 2007). Also, this approach will lead researchers to focus on benefits of non-algorithmic models to overcome the estimation problems (Attarzadeh and Ow, 2010).
Integrating the advantages of fuzzy and rough set theory, this study proposes a hybrid system to select an effective routing path in MANET. In the first stage, the data set consisting of resources and paths are fuzzified.
In the second stage, information gain is calculated by using ID3 algorithm for evaluating the importance among attributes. In the third stage, a decision table can be reduced by removing redundant attributes (resources) without any information loss. In the fourth stage, IF (condition) -THEN (outcome) decision rules can be extracted from the equivalence class to select the best routing path. Finally, set of rules were evaluated with proactive and reactive protocols namely DSDV, AODV and DSR in the NS-2 simulation environment. An example is also presented to show the applicability of the proposed method.
MATERIALS AND METHODS
The motivation for an analytical solution of path selection is based on various research efforts. A number of routing protocols such as AODV, DSR, DSDV and WRP have been proposed for Ad Hoc networks.
AODV is loop-free, self-starting and scales to large number of mobile nodes. It is a reactive protocol in which routes are created only when they are needed. It uses traditional routing tables, one entry per destination and sequence numbers. It determines up to date routing information and prevents routing loops.The modifications to AODV are more useful to moderately loaded high mobility networks (Rani and Dave, 2007).
DSR protocol is based on source routing where all the routing information is maintained (continually updated) at mobile nodes. However, it uses source routing instead of relying on the routing table at each intermediate device.
The main contribution of DSDV protocol is to solve the routing loop problem. Each entry in the routing table contains a sequence number, the sequence numbers are generally even if a link is present; else, an odd number is used. The number is generated by the destination and the emitter needs to send out the next update with this number. Routing information is distributed between nodes by sending full dumps infrequently and smaller incremental updates more frequently.
WRP uses an enhanced version of the distancevector routing protocol, which uses the Bellman-Ford algorithm to calculate the paths. Because of the mobile nature of the nodes within the MANET, the protocol introduces mechanisms which reduces the routing loops and ensure reliable message exchange.
In FCMR (Fuzzy Cost Based Multipath Routing) protocol, the traffic is distributed amongst the best selected paths from the existing multipath routing. The selection is based on consideration of six resource constraints such as bandwidth, computing efficiency, power consumption, traffic load, the number of hops and total vector cost (Raju and Ramchandram, 2008).
An alternative approach based on fuzzy and rough set methodology is described in this work for the selection of best routing path with minimum number of resources.
Fuzzy set theory: Fuzzy set theory was first proposed by Zadeh (1965). The main objective of this theory is to develop a methodology for the formulation and solution of problems that are too complex or ill-defined to be suitable for analysis by conventional Boolean techniques. A fuzzy set can be defined as a set of ordered pair A = {x, µ A (x)/x∈U}. The function µ A (x) is called the membership function for A, mapping each element of the universe U to a membership degree in the range [0, 1]. An element x∈U is said to be in a fuzzy set if and only if µ A (x) > 0 and to be a full member if and only if µ A (x) = 1. Membership functions can either be chosen by the user arbitrarily, based on the user experience or they can be designed by using optimization procedures. The triangular membership function is defined as: Rough set theory: Rough set theory is an extension of conventional set theory that supports approximations in decision making (Pawlak, 1982;Duntsch and Gediga, 1999;Skowron et al., 2002;Pal and Skowron, 2003). A rough set is itself the approximation of a vague concept (set) by a pair of precise concepts, called lower and upper approximations, which are a classification of the domain of interest into disjoint categories. The lower approximation is a description of the domain objects which are known with certainty belong to the subset of interest, whereas the upper approximation is a description of the objects which possibly belong to the subset. It provides useful information about the role of particular attributes and their subsets and prepares the ground for representation of knowledge hidden in the data by means of IF-THEN decision rules.
Information system: An information system can be viewed as a table of data, consisting of objects (rows in the table) and attributes (columns). An information system may be extended by the inclusion of decision attributes. Such a system is termed as decision system. Suppose we are given two finite and non empty sets U and A, where U is the universe and A, a set of attributes. With attribute a∈A, we associate a set (value set) called the domain of a. Any subset B of A determines a binary relation IND (B) on U which will be called an indiscernibility relation Eq. 1: where, IND (B) is an equivalence relation and is called B-indiscernibility relation.
Lower and upper approximation: Let us consider B⊆A and X⊆U. We can approximate X by using only the information contained in B by constructing lower approximation (2) and upper approximation (3) of x in the following way Eq. 2 and 3: And: Equivalence classes contained within X belongs to the lower approximation whereas equivalence classes within X and along its border form the upper approximation. Let P and Q be set of attributes including equivalence relation over U, then the positive region is defined as Eq. 4: where, POS P (Q) compromises all objects of U that can be classified to classes U Q using the information contained within attributes P.
ID3 Entropy: Attribute selection in ID3 (Wang and Lee, 2006) and C4.5 (Quinlan, 1992) algorithms are based on minimizing an information entropy measure applied to the examples at a node. Entropy has widely applied to many fields. The entropy measure is used to select the attributes providing the highest information gain.
Quinlan's ID3 decision tree algorithm grasps the entropy concept for attribute selection. A data set with some discrete valued condition attributes and one discrete valued decision attribute can be presented in the form of knowledge representation system J = (U,C∪ D),where, U={u 1 ,u 2 ,…..,u s } is the set of data samples, C={ c 1 ,c 2 ,…..,c n } is the set of condition attributes and D={d} is the one-elemental set with the decision attribute or class label attribute . Suppose this class label attribute has m distinct values defining m distinct classes, d i (for i = l, 2... m) and let s i be the number of samples of U in class d i . The entropy for a subset is given by Eq. 5: where, P i is the probability that an object is in i th class log 2 is log base 2. Gain (S, A), an information gain of example set S on attribute A is defined as Eq. 6: where, Σ is each value v of all possible values of attribute A, S v is subset of S for which attribute A has value v, |S v | denotes the number of elements in S v and |S| denotes the number of elements in S.
Illustrative Example:
A data set of resources allotted to five paths is given in Table 1 to select efficient path.
Fuzzifying the dataset: From Table 1, we consider bandwidth, computer efficiency, power consumption, traffic load and number of internodes as five condition attributes and total vector cost as a decision attribute to represent minimum cost for the selection of best path. Initially, in order to represent a continuous fuzzy set, we need to express it as a function which maps each real number to a membership degree. A very common parametric function is the triangular membership function which can be derived through automatic adjustments. Each attribute have three fuzzy regions (low, medium and high) described as follows: Band width: Low (0, 0.2, 0.4) Medium (0.3, 0.5, 0.7) High (0.6, 0.8, 1.0) Computer efficiency: Low ( Table 2. Information gain: ID3 uses an information theoretic approach aimed at minimizing the expected number of tests to classify an object. Using (5) and (6), the information gain for each attribute is calculated. We get Gain (Bandwidth) = 0.24, Gain (Computer efficiency) = 0.42, Gain (Power consumption) = 0.44, Gain (Traffic load) = 0.94 and Gain (Number of internodes) = 0.54.Since, Power consumption, traffic load and number of internodes has the highest information gain among the five attributes, bandwidth and computer efficiency may be excluded due to their less importance. The data set is shown in Table 3.
The decision attribute (Total vector cost) has two values, Good and Poor. Each value may be classified into its partition. From Table 3, it is clear that X G = {2, 3, 5} and X P = {1, 4}. It means path 2, 3 and 5 belong to partition X G and path 1, 4 belong to partition X P . For each partition, identifying the C-lower approximation of X Y and X N, we have CX G = {0} and CX P = {0}. Hence, building the positive region by combining the C-lower approximations of the two partitions: POS C (D) = {1, 2, 3, 4, 5}. From POS C (D), C-equivalence classes in the positive region are constructed and are shown in Table 4.
The calculated result is shown in Table 5. Reduct i of an equivalence class should be able to distinguish Equiv i from all other equivalence classes. Reduct i should be the joint of the entries in the i th row of the discerning matrix. Using Boolean operation, we get: Finally, the decision table can be built to extract the rules.
From Table 6, we can extract decision rules in IF-THEN form. Here the condition attribute values (Traffic load = high, No. of internodes = low) are used as the rule antecedent and class label attribute (Total vector cost = Poor) as the rule consequent. Hence, we can extract the following decision rules: Hence path 2, 3 and 5 are considered as the best path.
Simulation environment:
The simulations were carried out in the Network Simulator NS-2 with the area of 1000×1000 m for 5, 10, 30 and 50 mobile nodes. The simulation time is 200 sec and each simulation is performed under varying pause time, number nodes and packet size. The pause time indicates the amount of time that a node will pause in between two transitions. The pause times considered for this particular simulation are 10, 50, 100 and 150 sec and 10 movement patterns for each value of pause time. A pause time of 10 sec would denote a rapidly changing network topology and a pause time of 150 sec would denote a relatively stable network. The numbers of traffic sources considered are 1, 3, 5 and 7. The speeds of the nodes are randomly assigned during the creation of the mobility pattern. The speed varies between 0 and 20 m s −1 . The traffic is sent with different the packet size of 256, 512, 1024 and 2048 bytes and the packet interval time is 10 ms. The bandwidth of the wireless links is 11Mbps, similar to those of an 802.11b based network. Under the above conditions we have studied the path management using three ad-hoc routing protocols namely AODV (Perkins and Royer, 1999), DSDV (Perkins and Bhagwat, 1994) and DSR (Baiamonte and Chiasserini, 2004).
The metrics used for comparison are: Initial energy (Battery) 150 Joules Transmission power 0.9 W Reception power 0.8 W Idle power 0.2 W Sense power 0.0175 W Total energy consumed: Total energy consumption for each of the simulation and divided them by the total number of successfully received bytes.
Throughput: Throughput is the total number of Kilo bits (Kb) of data successfully received by the receiver per unit time (second).
Packet Delivery Ratio (PDF):
The packet delivery Ratio is the ratio of total number of successfully received packets to the total number of sent packets.
Average end-to-end delay of data packets: This is the average delay between the sending of the data packet by the constant bit rate source and its receipt at the corresponding constant bit rate receiver.
RESULTS AND DISCUSSION
Total energy consumed: The evaluation of energy consumption is particularly important in case of mobile ad-hoc environment as it is an infrastructure less network. For evaluating the energy consumption of the routing protocol, we use the energy model that is built into the NS2 network simulator. This energy model (Baiamonte and Chiasserini, 2004) is built around the IEEE 802.11 MAC protocol. In general a network interface is always in one of the four possible states: Transmit, receive, idle and sleep. The power requirement for transmit and receive mode remain high but for idle/sleep mode it is low. The parameters used for energy model in the simulations are.
In our simulation energy is measured in two diverse means first the total energy consumption is calculated for number of intermediate node and second for multiple connection/data flows/traffic. From the Fig. 1a, it is apparent that the total energy consumption of a node increases as the traffic in the network increases. The energy cost increase with the increase in nodes that is more predominant in the case of on-demand routing protocols than table driven protocols. This could be associated with the increase in the number of routing packets required to maintain routes to more destination nodes in the case of ondemand routing protocols. However, proactive routing protocols by default maintain routes to all possible destinations within the network irrespective of whether there is any data to be sent to that destination or not.
For the multi source-single destination scenario the total energy consumption of a node increases as the traffic in the network increases, DSR (Fig. 1b) is observed to consume the maximum energy. This is due to the unnecessary loss in valuable energy, resulting from transmission of packets along stale routes. Throughput: In our simulation throughput is calculated in three diverse means first by calculating received packets with respect to different pause time for single connection/data flow/traffic, second by calculating received packets from multiple connection/data flows/traffic and taking the average of all these connections to obtain the throughput and third by increasing the intermediate nodes. Figure 2a shows the effect of throughput from single connection/data flow/traffic for different pause times, here the packet received (throughput) AODV, DSDV and DSR remain high for different pause times. But in the case of multiple traffic connections/flows (Fig. 2b), it is observed that the AODV, DSDV and DSR throughput rapidly reduces as the number of flow increases. This due to the fact as the number of traffic connections/flows introduces more congestion, packet drops and processing delay in the intermediate nodes.
In contrast the lesser number of traffic connections/flows throughput remains high. It is observed from the Fig. 2c that throughput increases as the number of intermediate nodes between source and destination increases. The thick concentration of nodes gives the advantage of solid connectivity between pair of nodes this in turn reduces the probability of packet drop both in proactive and reactive protocols. The packet delivery Ratio for the three protocols AODV, DSDV and DSR were analyzed with increase in intermediate nodes and different connection/flow/traffic. The packet delivery ratio (Fig. 3a) increases as the number of intermediate node increases in AODV, DSDV and DSR for single connection/flow/traffic, Less number nodes creates instability of link i.e.,) packets are dropped due to non availability of routes and it leads to the formation of holes/gap in network. In contrast as the number of nodes increases the probability of packet drop will be less and it also avoids holes/gap formation in the network.
From the Fig. 3b it is observed that the packet delivery ratio for AODV, DSDV and DSR decrease as the number of traffic connections/flows and increase in load. Initially on low traffic load the AODV, DSDV and DSR performs better, as the load increases PDR decreases. Similarly increases in number of traffic connections/flows in AODV, DSDV and DSR leads to congestion in the intermediate nodes and it can't able to appropriately deliver the packet to the destination due to frequent packet drops in the forwarding nodes.
It is evident from Fig. 4a-b for AODV, DSDV and DSR that End-to-End Delay increases with (i) Increase in number of intermediate nodes: Higher number of node the increases the hop duration of the packet travelling from source to destination. (ii) Multi source traffic/connections/flows that cause congestion in the network and this led to packet delay. In case of congestion more and more packets are queued in the router buffer that is located along the path to the packets' destination. In the worst case, the buffer will overflow causing the router to discard packets. The propagation delay will continually increase until the congestion is cleared.
CONCLUSION
From the graph and analysis of fuzzy and rough set based path vector calculation three conclusions were made for stable path management with effective usage of available resources so as to maintain the stable link and to increase the network life time.
Network with significant number of intermediate node decreases the possibility of link failure since it's been inter connected solidly, the packet delivery fraction also gets increased as the packet drop in the network is reduced with least like hood of holes formation/gap between nodes and delay will be reduced as the least time required for route establishment.
For a stable link, the routing path is to be established with less energy consumed intermediate nodes but not on the basis of shortest path. Node with heavy consumption results in link failure since its infrastructure less mode of propagation and it leads to packet drop, delay, decrease in throughput and formation of holes in the network.
Number of traffic/connections/flows cause congestion in the network and it would result in delay; there is also a gradual decrease in throughput, increase in the total energy consumption, packet drop and delay, with increase number of flow.
From the conclusions it is apparent that to maintain a good routing path, path number 2, 3 and 5 from the rule Table 4 are considered to be the best qualified path that will guarantee the link stability and increase the network performance. | 5,093.2 | 2011-10-15T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
How Can We Define Mastery ? Reflections on Learning , Embodiment and Professional Identity
In this article we reflect upon the knowledge production process through the body in the social-material arrangement of craft. Resorting to the embodiment paradigm, we aim to theoretically understand how someone reaches the mastery that characterizes the domain of craft skill. In analogy with craft practices, we analyze how knowledge that relies under practical performances such as skill are built and kept through the bodily relation with the making of things in the immediate contact with the world. At the end, we conclude that such reflections about mastery may be useful to investigations on professional identity.
Introduction
This article aims to foster a reflection on the corporal knowledge-production process in the socialmaterial arrangement of craft.Falling back upon phenomenology to explain the learning experience as a result of the perception of the living body, we seek to theoretically understand how one achieves the so-called mastery related to craftwork know-how.As mastery can be defined as practical proficiency, or as the embodied comprehension of a practical knowledge, we focus on the way how skill constitutes and maintains itself through the bodily relation that someone achieves while performing practices with high levels of proficiency.The objective of this paper, though, is to develop a theoretical background to analyze the kind of practical know-how, such as mastery, that relies on some embodied features of its practitioners to build professional identities.
The intended contribution to the field of Organization Studies (OS) is to theoretically reinforce the concept of mastery as a way to understand skill as resulting from knowledge embodiment.In this respect, even though practice-based studies on OS have privileged the analysis of knowledge embodiment as lived experience, the lack of concrete references to the cultural context where it takes place ends up making most of those studies describe such processes in a superficial way (Sandberg & Dall'Alba, 2009).While traditional ontology takes the principle of detachmentmeaning people are essentially detached from the world, but get connected to it as they perform and experience many practical activities during the process of living in the worlda phenomenological perspective such as the one we take in this article considers that the entwinement of the person with the world is the defining condition of being.
Within craft, mastery relates to a way of being that rises from practice, during the embodiment process of a skill.Being a master craftsperson refers to the social identity of someone due to the skills that this person has learned with his/her body while performing such practice.Departing from that, we can achieve the understanding that the person who embodies a skill not only has the knowledge that describes his/her professional formation, as the embodiment of a skill changes the whole person, transforming him/her into a skillful body.We understand that the enduring contact of someone with a certain kind of practical knowledge is a defining experience, in such a way that professional and personal identities are never separable and both cannot exist beyond the context of practices regarding this profession.
As we intend to discuss professional identity taking craft as a powerful example and/or object of reflection, considerations on professions and professionalization are also necessary.Professions are understood as occupations with special attributes, where cognitive dimension is centered on a certain body of knowledge and techniques and on the training necessary to master such knowledge (Larson, 1977).In a modern perspective, professions are closely connected to social structure (Parsons, 1939) and to an ideal of work that has been developed in the West, particularly after the Industrial Revolution.If in the pre-modern period professions would be considered as a matter of tradition (Sennett, 2008), modern professions depend on practical and intellectual institutions that legitimate structures of authority and competence, such as universities (Jackson, 1970).The professional environment, understood as an institutional environment, has deep implications on the process of embodiment of professional know-how and the forge of professional identity.
However, the idea of identity that we support in this paper has no connection to the performance of social actions, as we want to show that the embodiment of professional identity goes beyond learning the cognitive and normative features of a certain kind of work.To accomplish this we briefly examine the field of sociology of professions, beyond the bureaucratic tendencies of its traditional approaches (Klegon, 1978;Parsons, 1939), in order to clarify how professional practices shape ways of being in the world that goes way outside the limits of occupational tasks performed by social actors within an institutional framework.Notwithstanding how the discussions within the sociology of professions field develop in the background of modern paradigms, we turn to craft as an illustration of a pre-modern or a non-modern profession that can be analyzed through the lenses of post-modern theories as a BAR, Rio de Janeiro, v. 12, n. 4, art.2, pp.348-364, Oct./Dec.2015 www.anpad.org.br/barphenomenology and embodiment paradigm to help to explain how the forge of professional identity is a matter of becoming.More than a profession, craftsmanship is a way of life that seems to have been waned with the advent of industrial society, but -as Sennett (2008, p. 9) states "Craftsmanship cuts a far wider swath than skilled manual labor; it serves the computer programmer, the doctor, and the artist".
It is worth mentioning that we could have chosen the word craftsmanship as a synonym of mastery, as we understand their meanings to actually be really close.So we can highlight that the word craftsmanship connects the substantive craftsman/craftsperson with the suffix -ship, meaning certain status, domain or specific condition that defines a state or condition of being.Relying on that, the intended meaning for the word mastery in this article is one that designates the very own condition of being a master of something.What we want to express through the idea of mastery is the practical domain of a skill, or a mastery that we carry in our bodies and that is refractory to formulation in terms of any system of mental rules and representations.Such skill is acquired not only through formal instruction, in an institutional arena, but also and mainly by routinely carrying out specific tasks involving characteristic postures and gestures in the natural setting of such practice.It also relates to the total field of relations constituted by the presence of the organism-person, indissoluble body and mind, in a richly structured environment (Ingold, 2000).
To achieve the objective of this article, we also need to define (even though in a superficial way) craft among the vast array of human achievements.Initially it may be necessary to shed light on the usages of the word craft and to choose the one that seems to be more appropriate to the considerations about mastery that we want to make here.After all, craft entangles different meanings, shifting from an artistic expression to a range of extremely practical activities, as well as to a system of production.Even though the word craft may have enough representative potential to comprise these and other ideas, we start from a relatively stable definition, which states that craft is a set of knowledge and skills that may be used in a practical way, in order to produce something, usually an object, according to a preformulated aim (Adamson, 2007;Becker, 1978;Risatti, 2007).Such aim is rooted in a craft's traditions and in the desire to perform a purposeful work that are both constituent parts of the master's identity, meaning something he/she deeply understands as an essential part of his/her performance, and also an essential part of him/herself.
In the following sections, we hope to clarify some of the abstract ideas that we have just outlined.The article is structured as follows.First, we define know-how from a phenomenological standpoint to highlight the embodiment paradigm (Csordas, 1990) as an emergent issue for practice-based theories on learning within the field of OS.Next, we delve into the work of the skillful craftsperson in order to connect the idea of someone's embodiment of a skill with the formation of professional identity.Then, we bring up the discussion of previously presented ideas, followed by a brief conclusion that clarifies the intended contributions of this paper and indicates the applicability of the ideas that we have discussed in future empirical studies.
An Embodied Approach on Know-how
The embodiment paradigm (Csordas, 1990) opens up possibilities for understanding culture as embodied experience.As a result, it also allows investigations on how the world constitutes people as selves (Hancock, 2008).Traditionally, studies that take the embodiment perspective have flourished in the anthropological field following two main streams.Firstly, one that approaches the body as a resource for the metaphors that constitute culture (Csordas, 1990).Secondly, one that takes the body within the ongoing process of adaptation to culture (Kleinman & Kleinman, 1991).
According to the assumptions of both streams, many studiesmore or less related to the embodiment paradigm or to a phenomenological perspectivehave developed emphasis on two specific issues.The first one, more related to the first stream, highlights that body movements are the generating principle of a way of somatic knowledge (Ness, 1992;Reed, 1998;Sheets-Johnstone, 1990;Sklar, 1994).
The second one, more related to the second stream, elaborates the understanding that perception rises from a bodily and embodied pre-reflexive knowledge, and that cognition rests upon the environment as a consequence of a process of active engagement (Csordas, 1990;Ingold, 2000;Merleau-Ponty, 2012).
Studies that address the issue of body movement state that somatic knowledge is a resource for embodied communication.They understand that body movement itself produces an authentic way of knowledge that is just as laden with the meanings that people need to accomplish their existences as any other form of knowledge, for example the ones that enable verbal communication (Sklar, 1994).It means that people express their own selves and recognize each other's selves through the habits they accumulate in their bodies as much as with verbalizing words.After all, both are ways of communicating.According to Greiner (2008), many theories that intend to decipher spatial images of the bodysuch as posture schemas, images of the self, images of the body and of othernessstem from the idea that the body has a communicative potential in itself.The understanding of the insides and the outsides of the body is conceived as a relationship between the information that comes from the outside and the feelings that are part of inner processes.
Two points deserve our close attention regarding studies that take body movement as a specific way of knowledge.The first point is that the way through which they understand perception depends on the in-and-out of body flux of certain cognitive processes.These processes impose that the body must be a vehicle for experience and a processor for knowledge, which only becomes clear when it reaches a person's conscience.Extending on this point, it is worth mentioning the work of neuroscientists such as Lakoff and Johnson (1999) and Llinás (2002), who stated that thought is the inner expression of movement.As thought rises from movements, the process of knowing and recognizing the things of the world starts from the sensorimotor system.For Lakoff and Johnson (1980), for example, this is how the metaphors of thought get organized, i.e. the way through which people conceptualize the world and themselves starting from their own bodily experiences.
The second point is that, as these studies seek to recognize a common basis for embodied ways of knowledge within body movement, they tend to get close to an understanding that takes the body as a semantic platform (Sheets-Johnstone, 1990), in a textual way that borders the semiotic paradigm.According to this, the living body is perceived as a kind semantic model that rises from action (Greiner, 2008).Concepts are generated or soar to conscience in the body through actions, and in turn, these concepts also lead to actions.Action feeds back concepts, and concepts feeds back actions in a retroactive cycle.Movement, as a sort of link with meaning, is what gives support to the continuous flux of information between the body and the environment.The way that movement gets ordered in time and space is also the way that the images of the body are built in the flux inside the body (images that begin with thoughts) and outside the body (images that turn into actions), as such images organize themselves in latent embodied processes of communication.Thus, conscience encloses an abstract meaning that comes from the environment via embodiment, which means to say that knowledge comes from outside and then becomes part of the body.
It may remind us of how Mauss (1973) described the acquisition process of a skill (in his own words, a body technic) as a result of a sort of learning that mirrors the social-cultural environment to produce changes in the body.For Mauss (1973), the embodiment process of a body technique is a way of acquisition, inflected by the formalism of the technique.In his own words, "every technique properly so called has its form" and "for every technique there is an apprenticeship" (p.475).Thus, the embodied behavior is consequence of the individual's psychological adaptation, and it is moreover ruled out by formal education or at least by circumstances of collective life.
Relying on this, Mauss (1973) classifies body techniques according to modes of transmission of body techniques, taking embodied learning processes as a synonym of training (indoctrination) and as a socially elaborated way of body conditioning.Beginning with such understanding of body as an instrument, and from the idea of embodiment as training, we can reach the very important notion of dexterity.The meaning of dexterity is frequently mixed up with ability (skill), but Mauss (1973) points out that, by specifically using this word, he wants to refer precisely to physical capabilities.The quotation below may clarify this idea.This is the place for the notion of dexterity, so important in psychology, as well as in sociology.But in French we only have the poor term 'habile' which is a bad translation of the Latin word 'habilis', far better designating those people with a sense of the adaptation of all their wellcoordinated movements to a goal, who are practised, who 'know what they are up to'.The English notions of 'craft' or 'cleverness' (skill, presence of mind and habit combined) imply competence at something.Once again we are clearly in the technical domain (Mauss, 1973, p. 78).
So, by mentioning the Latin origin of the word habilis, Mauss (1973) associates dexterity with the classical definition of skill as technical proficiency (in Greek, tecknē means work of craftsperson, as Mauss (1973) himself emphasized in the quotation).Body dexterity corresponds, though, to functional adaptation to a skill, depicted as an objectified entity close to a tool, or more precisely to a technical object, once "body is man's first and more natural instrument" (Mauss, 1973, p. 75).The idea of a body as instrument or technical object reintroduces Mauss' (1973) theory within the separation between two dimensions of being: the objectivity of body and the subjectivity of psiqué that rules the body.This separation is in complete accordance with the Western concept of individuala concept that Mauss himself and the functionalist sociology of early 20 th century have helped consolidate.
Such understanding may make us look at the body from inside out, as if the source that generates action is located inside of it and its materiality can be reduced to certain mechanical features.Surely the theories on movement have gone much further than Mauss (1973), as they include environmental influence over the body and also recognize an embodied power of agency.But divisions between what is in and out of the bodyi.e. the mind and the world as separated entities -remain implicit in these studies.Mauss (1973) focused his analysis on daily body techniques such as sleeping, waking, walking, running, dancing, jumping, climbing, swimming, etc., and not on the domain of socially established professions.Nevertheless, it is no coincidence that formulations about movement and cognition have emerged in fields where the body is subject to the learning of a technique, such as dance (Desmond, 1997;Ness, 1992;Sheets-Johnstone, 1990), dramaturgy and performance (Jeudy, 2002;Solso, 1994).
This learning through the body is not explicitly related to professions, as the institutional basis of it is formal, i.e. cognitive learning.The core of modern professionalism is a kind of learning process that is strongly detached from practice, more centered in the mind than in the body, as well established professions cannot "simply [depend] on 'craft' factors in the learning of techniques and skills" (Jackson, 1970, p. 4).Within the field of sociology of professions, the approach on the learning body has driven an emphasis on formal education as prerequisite for the constitution of professional identity (Rodrigues, 1998).In this field of research it is considered that one can reach the skill level of the working body through formal training in a work technique, modulated to fit an educational environment that has, of course, connections to the real environment of professional performance, but is but a simulation of it.This field is also deeply concerned with the process of institutionalization of professions, namely division of work related to professional status and social arrangement of practices, in a way that formal professional education plays its role in tacitly indoctrinating bodies to perform a work practice or an occupation that may fit the needs of the structural dynamics of a capitalist system of production.This helps to explain why professionalization and work organization have been set apart from reflections about the body and, specially, why "the study of the body has tended to become estranged from the study of work just as analysis of organization has been abstracted from the body" (Hassard, Hollyday, & Willmott, 2000, p. 2).Within the field of OS, the issue of body and organization may be illustrated by a number of dimensions that abut a discussion about body, work performance and professionalization without directly approaching it.A good example comes from the study by Hindmarsh and Pilnick (2007) on the nature of embodiment in the workplace of teamwork in preoperative anesthesia.In exploring an alternative way of examining the body in OS, that study intended to show how competent organizational members display intercorporeal knowing, that is practical knowledge of the work they perform together.Even though the authors paid close attention to the normative notions of a medical team, they did not take the standpoint of considering the relations between the embodiment of work in the context of the normative/institutional directedness of medical professions.Other studies focusing on embodiment in the context of work teams and/or professional groups also turned attention to the importance of normative features as backbones for the learning processes that happens in the body and by it while people perform working practices together or in the same environment (Almeida & Flores-Pereira, 2013;Llewellyn & Hindmarsh, 2013;Mirchandani, 2015;Rosa & Brito, 2010;Styhre, 2004;Tuncer, 2015;Yakhlef, 2010), but again without any specific interest in how work itself is linked to a network of embodied processes that ultimately shape professional identities and vice-versa.
Actually, even though the studies that we have just mentioned do not adhere to an institutional approach, the concept of professionsin the way that the field of OS has inherited from the field of sociology of professionsmay be an obstacle for the flourishing of an embodied approach on professional identities.It is a common point in OS the understanding that more so than other types of social actors, the professionals in modern society have assumed leading roles in the creation and tending of institutions, as well as certain pattern of behaviors.This supports the assumption that professions are directly linked to institutional agents that may attempt to create general cultural-cognitive frameworks; or to devise normative prescriptions to guide behavior; and to still exercise coercive authority (Scott, 2008), thereby substantiating the idea of professional identity according to a structural-normative paradigm.
On the other hand, studies that emphasize the perception experience as originating from corporeally embodied pre-reflexive knowledge seek to understand embodiment of knowledge over and above the duality of what is in and out of the body.These studies advocate the phenomenological perspective of living-in-the-world, which means taking body (indistinctly mind and flesh) in complementarity and continuity.According to Ingold (2000), for example, we could refer to an embodied mind as much as to an enminded body, because both are entwined.Such an understanding about the body also brings a new light on perception within the field of OS.For example, the work of Sandberg and Dall'Alba (2009) take the entwinement of mind and body as a reflexive standpoint to understand how practice is constituted, and to explore the role of the body in the performance of organizational phenomena.Similarly, by utilizing the practical approaches that do not regard knowledge as something possessed, but as part of a practical engagement with an organizational performance, scholars have advanced our understanding of how organizational knowledge is produced, learned, sustained, performed and developed through everyday work practices (Gherardi, 2006(Gherardi, , 2009;;Nicolini, Gherardi, & Yanow, 2003;Sandberg & Dall'Alba, 2009).Among the different definitions that are gathered together under the label of Practice-Based Studies (PBS), the body is central to those that explore practices "from within", "from the point of view of practitioners and the activity that is being performed, with its temporality and processuality, as well as the emergent and negotiated order of the action being done" (Gherardi, 2009, p. 117).From this definition it follows that knowing is a situated activity that relies on the body and that knowing in practice is always a practical accomplishment (Gherardi, 2009).
With this definition in mind, we shall now turn to the subject of how the immediate experience of perception is deeply influenced by embodied social structures that are part of the world that we live in (Csordas, 1990).By doing that, we will take one more step to elaborate the way through which those experiences relate to the formation of self and professional identity.
Living Body and the Context of Practices
Again, we resort to Ingold (2000) who had articulated the embodied paradigm (Csordas, 1990) and phenomenology (Merleau-Ponty, 2012)in order to understand how the presence of the live body in the world constitutes the person.Going way further, Ingold falls back upon psychology, via its ecological branch (Gibson, 1974) to outline a phenomenology of dwelling that could include practices of life in context.Ecology and holism are key references in Bateson (1987) and they are deepened by Ingold (2000), along with Merleau-Ponty's (2012) influence.According to Ingold (2000), ecology of life should deal with the dynamics of organism plus environment as a whole, with no distinctions between body, conscience and context.
Apart from revealing knowledge that is out there in the world, life is an ongoing process in which "Every living being, then, emerges as a particular, positioned embodiment of this generative potential" (Ingold, 2000, p. 51).That is the same of saying that experience with or in the environment supplies formation of conscience of the organism-person about him/herself, his/her attitudes and orientations regarding the world.It means that experience cannot be taken as a mediator between mind and nature, since both domains are not separated (Bateson, 1987).Instead, experience should be taken as intrinsic to the ongoing process of being alive in the world, or to the whole involvement of organism-person with environment.Experience though is about ontology of engagement.Now, taking the metaphor of craft, and the concept of mastery as reflexive issues, we acknowledge that since the main feature of the craftsperson is the practical domain and the fine execution of the objects that he/she aims to create (Dormer, 1994;Sennett, 2008), it is no longer possible to describe it without understanding the way he/she interacts with the world.Within the field of management, Fischer (2012) put forward a definition of mastery as the dominance of a field of knowledge and practice.In other words, it entangles a conceptual and practical structure, i.e. a field that is ordered by its own structure of practical knowledge.Such structure is culturally given and comes into being through rites of passage that ensures its maintenance and renovation.So, the description of ability that reaches the concept of mastery that we have just mentioned suggests that the context also has a crucial importance, as it is the basis for the practice and the space/place for raising knowledge about such practice.
In putting perception under this light, we can cite Merleau-Ponty (2012) when stating that the world can never be separated from the person who perceives it, as every perception is a communication or a communion that enables the mating, meaning a deep and almost sexual fusion, of our bodies with things.As this mating forms us as persons, it also forms the environment that gets permanently changed after our presence.So the world is the ambiance where the entwinement of persons and things originates practices of living.This entwinement is a symbiosis, but not a fusion, which is the same as saying that the person could not live without the world, but both could actually be described with certain independence from one another.
Such assertion about perception leads us to deny the idea that the process of knowledge embodiment could be described as acquisition.Understanding this process takes restoring the human organism in the original context of engagement with its surroundings.When talking about achieving the level of skillin fact, the subject of this articlethis way of defining the learning process reverberates in social ways of transmitting a practical know-how.Traditional models for analyzing social learning processes tend to separate the process of transmission of embodied knowledge in two phases: first, it takes attention to see and understand a particular technique; then it takes turning this initial attention in a kind of mental map to perform the technique.So before practicing, "a generative schema or program is established in the novice's mind from his (sic) observations of the movements of already accomplished practitioners" (Ingold, 2000, p. 353).And after acquiring it, "the novice imitates these movements by running off exemplars of the technique in question from the schema" (Ingold, 2000, p. 353).
It is undeniable that learning a skill involves observation and imitation, to put it in simple terms, but it is questionable that observation should be taken as a mental process as much as a perceptual one, because it leads to metaphors, mental images of real perception.The same happens to imitation, that could end up being a representation of practices, as if the practitioner was performing a social role.The idea of practical engagement with the world implies observation and imitation, when seeing and perceiving the work of a master enables the apprentice to experience the perceptual engagement with the environment.The key to observation and imitation, though, is when the apprentice reaches the conscience of his/her own body and of the world that involves it while observing and imitating.Undertaking practice and guided by his/her observations, the apprentice captures the sensation that things may have to him/her, which means that this person "learns to fine-tune his own movements so as to achieve the rhythmic fluency of the accomplished practitioner" (Ingold, 2000, p. 353).
In summary, Ingold (2000) lists five critical dimensions of any kind of skilled practice with reference to the body.First, intentionality and functionality are immanent in the practice itself, rather than being prior properties, respectively, of an agent and an instrument.Secondly, skill is not an attribute of the individual body in isolation but of the whole system of relations constituted by the presence of the artisan in his or her environment.Thirdly, rather than representing the mere application of mechanical force, skill involves qualities of care, judgment and dexterity.Fourthly, it is not through the transmission of a formula that skills are passed from generation to generation, but through practical, hands on experience.Finally, skilled workmanship serves not to execute a pre-existing design, but actually to generate the forms of artefacts.
What needs to be highlighted, though, is that observation and imitation are part of the relation between the apprentice and his/her surroundings.Such relation has a social dimension, as it is supposed to happen among a community of practice or through the dialogical contact with the master.In both cases, the practitioner is not a self-constrained individual, rather the opposite.Elaborations on practice are not kept in his/her mind, but they are forged via social contacts that surely are part of the environment.So the temper of the practitioner is also something that rises from outside to inside, and not the opposite.Nevertheless, observation and imitation are part of a broader idea of embodied attention that have an after-effect on personality and identity.The apprentice becomes an expert not because he/she pays attention to play-acting of practical knowledge, but because he/she can decline to use such kinds of objectified stunts that equal acquisition of knowledge.About learning through imitation, which means perceiving, understanding and performing in a practical way, it is worth noting that even though they are performing similar social roles, master and apprentice are different persons and the process of learning and transmitting knowledge between them entangles conflicts and contradictions, as perceptions and abilities derive from personal experiences.It means perceptions and abilities are social productions in dispute and the content and the cultural meaning of the practical knowledge are under the judgment of what to reproduce, how, for what reason, and to the behalf of whom.Such processes occur based on social struggles that manifest on and through the body, and that may reproduce the traditional structure of craft practiceas a cultural assetwhile it may change it according to embodied social features of the practitioners (Bourdieu, 2000).It also highlights the evidence that the body is a social, historical construction that carries on disputes in itself because it is a political agent (Foucault, 2009), as in discussions of gender and race.Lave (1997) have called understanding in practice, as opposite to a culture of acquisitionthe last one referring to learning theories that have been privileged for a long time by cognitive sciences and western educational institutions.Learning as acquisition, which implies mentally internalizing knowledge under the representational rules and schemas, is something detached from practice.Differently, understanding in practice is a process in which knowing is inseparable from doing and continuity between both actions is a pre-reflexive process of engagement with the world.In accordance with this theory of knowledge, such embodied know-how is the most powerful way of knowing, because the person him/herself becomes the knowledge that have been learned (Lave, 1997).
Practical space of the body and processional identity that rises from practice
Even though discussions undertaken in this article may be too theoretical and even quite abstract to fit reality of work in organizations, we consider it relevant because few previous works have attempted to understand professional identity from the standpoint of an embodied paradigm within OS.The issues of identity related to the body have especially emerged in gender and diversity studies (Christie, 2006;Gherardi & Poggio, 2001;Martin, 2001Martin, , 2003) ) and in studies that seek to analyze disciplinary power relations within organizational dynamics (Collinson, 2003;Fleming & Spicer, 2003;Flores-Pereira, Davel, & Cavedon, 2008;Hodgson, 2005).Albeit these are invaluable contributions to the study of body and embodiment on organizations, its emphasis on power relations and on the idea that bodies are shaped and produced in order to be efficient, regulated and docile sometimes blurs the analysis of transformation processes of the body from a phenomenological standpoint.
By that, the becoming of the body itself is kept under interpretative analysis of social-political context where those transformations occur.Recognizing how relevant the objectives and motivations of researchers that have taken those paths are, it is worth mentioning that studies on embodiment are in need of methodological tools that could enable the investigation of movements, sensations, perceptions and changes of the living body.Maybe that is why approaches that seek to explain bodies from the perspective of gender, diversity and power relations are more common, as they can rely on representations and mental elaborations about the object under study.
We do not want to deny or diminish the idea that the body is practically as much as theoretically a political matter (Foucault, 2009).But our effort in this paper embraces the centrality of corporeal experience to explain the developing of knowledge, self and identity.The phenomenological approach, as well as the embodiment paradigm that tags along with it, seek to oust the body from its usual objectified condition and to privilege experience as the starting point for knowing the world.Which means that we should try to capture experience in its immediacy, as it is a legitimate way for capturing reality and producing knowledge about it.Cultural issues regarding politics of and with the body are implicit to a phenomenological approach, as it assumes that experience is always backed by culture, once it can be pre-objective but never pre-cultural (Merleau-Ponty, 2012).Yet, the fact that such political issues shape perception is important to distinguish the experiences that result from the immediate contact of the body and the world, as well as to qualify the person that experiences it.Both assertions can be taken as important guidelines for methodological concerns on the study of embodiment processes, admitting that political issues are not crucial for capturing experience but for interpreting and explaining it.
Taking political discussions about the body under consideration, we observe that a small but relatively diverse group of studies are emerging in the field of OS.These studies tend to talk about the becoming of body within organizational context, mentioning identity more or less clearly as a subjective instance that rises from socio-material arrangement of work (Barzin, 2013;Parolin & Mazzotti, 2013;Viteritti, 2013).These studies seek to problematize professional identity as a product of knowledge and skill embodiment, while they attempt to describe it as a process.The discussions that each of them undertake deserve to be briefly detailed here.
First it is worth mentioning Parolin and Mazzotti's (2013) work.Even though it does cite the word identity, this concept is clearly identifiable in the description of the two craftsmen whose workplaces and work practices are taken as empirical objects of the study.Based on Actor-Network Theory, those authors propose a model to describe knowledge that is put into practiceworking knowledge, as they callin the bodily interaction or worker and workplace.Parolin and Mazzotti (2013) highlight the importance of social-material arrangements for the building of knowledge that emerges from selective translations and they indeed dedicate a whole section of their article to theorize about education of the senses through professional practices.They state that professional practical knowledge is learned through process of situated learning and it is distributed equally between human beings, artifacts and material and linguistic systems of classification under use in the workplace.Although emphasizing the social dimension, as they reaffirm the importance of communities of practice for professional learning, Parolin and Mazzotti (2013) do not intend to develop an analysis of subjective processes that undertake the embodiment of professional knowledge and they also allude to professional learning as part of the identity formation process.
Similarly, Viteritti (2013) also develops analysis of knowledge for a professional group of scientists but not giving much importance to identity issues among them.The author investigates three episodes of knowledge embodiment by scientists in their laboratories, seeking to describe how body achieves mastery in laboratory practices.While not developing the concept of mastery, Viteritti (2013) uses it to explain the degrees of selective skill and professional competence that are necessary to the laboratory practices.Through the presented cases, Viteritti (2013) concludes that learned and embodied BAR, Rio de Janeiro, v. 12, n. 4, art.2, pp.348-364, Oct./Dec.2015 www.anpad.org.br/barpractices are produced by gradual efforts of the body in the course of actions, through daily immersion in the social-material environment of the practical learning.
In Barzin's (2013) study, worker's gestures are understood as recursive patterns and routines of body movement.Three instances of corporealitytechnical, aesthetical and embodimentare taken as generative elements of organizational gestures.According to empirical research findings conducted between workers organized in a production line, professional and personal identity merge with embodiment when technique is perfectly mastered and transforms itself in elegance in executing movements.For this author, in a certain point during the process of embodiment of working gestures, mastering merges technique and aesthetic aspects of gestures and work tools so deeply in the corporeality of the practitioner, that techniques, gestures and tools altogether become part of his/her body.Elegance appears, though, for the external observer of the gesture and it looks absolutely spontaneous, as the execution of complex body movements was easy to achieve.Elegance in the gesture is an identity trait of the practitioner, as it is original and carries the signature that makes the unique style of one person recognizable.
Each of the three studies that we just mentioned seek to clarify how embodiment processes of knowledge/skill are related to what we tried to define as mastery.In all of them, we can notice that the notion of mastery is central to characterize the way through which practical relation with knowledge transforms the bodythat is indistinctively both object and subject of action.In this sense, reaching the skill level that characterizes mastery implies deep changesin the body, as much as in the selfthat reflects on the very being.Within OS, the mentioned works are relevant, especially when we discuss them side by side with theoretical articulations that we have developed in this article, because they enable the study of professional identity from an embodied dimension (Barzin, 2013).They also propitiate the analyses of social-material arrangements that are part of the skill embodiment process (Parolin & Mazzotti, 2013;Viteritti, 2013) and that are inseparable from it, as we have stated in this article, resorting to a holistic phenomenological perspective (Ingold, 2000;Merleau-Ponty, 2012).
Methodological issues in the study of embodied knowledge
Besides this theoretical gap, we also want to address the need to discuss different paths that may guide empirical studies about body changing processes under craftsmanship and mastery.Taking the phenomenological perspective of Merleau-Ponty (2012), we suggest that the focus of analysis should be the practical space of the body, in order to grasp the understanding of fundamental relations between body and space.As we have said before, Merleau-Ponty (2012) talks about the movement of being in the world and how situations that bring up body movements are not entirely articulated, i.e. its meaning is not entirely recognizable.Such operations are lived as open situations that invite experimentation and bodily recognition to provide practical meanings to the things of the world.These are the kind of knowledge relations that the body is able to face in a reflective way.About this, it is important to state that reflective movements open up the meaning of a situation as they are driven to objects, while perception itself is the intention of meaning that relies on pre-objective view of what we call being-inthe-world.
While discussing spatiality of body, Merleau-Ponty (2012) highlights that the borders of the body are a frontier through which ordinary spatial relations overcome themselves.It happens because parts of the body and of space get intertwined in each other in an original way, as they get mixed and juxtaposed.Thus, in what concerns spatiality, body is something else that is not exactly a figure, nor the background.Every figure lines up under the double horizon of exterior space and bodily space, and both are forming the practical system of the body plus environment.
In this sense, Merleau-Ponty (2012) proposes that we recover the associationist notion of body schema.Body schema sets up a global conscience of one's posture in the sensorial world.It is not the conscience about existing parts of the body, but its whole integration and its engagement in the project of life or organisms.Body schema is dynamic and it brings to light a new kind of existence that is also circumstantial and deeply oriented towards bodily responses to environment changes.Still according to Merleau-Ponty (2012), the idea of body schema is a way of grasping meaning from practical action, making every single movement of the body acquire a sense related to the very aim of the body in performing those actions.In this way, body schema is a way of expressing that the body actively exists in the world.Thus, spatiality of the body is not a spatiality of position, but rather a spatiality of situation, which designates the anchorage of the active body as an object, in face of practical activities.Such anchorage comprises a rich comprehension of the environment where practices take place (Ingold, 2000) and it may compel restituting social conflict issues regarding embodiment processes as a way to understand how the body is, simultaneously, subject of perception and object in the context.It also allows to discern how both positions are entwined and feed one another, as ways of perceiving are coupled with the possibilities of experiencing aspects of the world that are cultural, social and historical.To apprehend and understand practical activities, though, we need to again draw borders between body and the world in order to track directions, to establish lines of power and elaborate perspectives.Summing up, it is necessary to reorganize those limits according to projects and objectives, relations of neighborhood and familiarity, in order to understand how the inner activity of the body transcends to the environment.It is also necessary to invert the natural relation between body and surrounding so we can understand human work as part of thickness of being (Merleau-Ponty, 2012).
Body appears simultaneously as a posture and as an object, which opens up different possibilities for research.By this we are saying that research on embodiment of skills should try to understand and depict body schemas and it's surroundings in relation to each other.The main challenge for research on this issue is that we need to catch the meaning of action and transcript it to a general comprehension, so the unspeakable features of experience have to gain an objectified and even representational dimension.
A way to do it may be via Actor-Network Theory, as Parolin and Mazzotti (2013) have demonstrated.Another could be through ethnographic research, when the researcher him/herself experiments this relation in his/her own body.
About that, we could mention the study of Almeida and Flores-Pereira (2013) on the work of ballet dancers in a dance company.Based on ethnographic research, this article describes in a very accurate way how the identity of professional ballet dancers is inscribed in people's bodies while they train and perform for the dance company.With a view to investigate the embodiment of ballet dancers' work, one of the authors of the study engaged in a professional ballet company using her own embodied resources to observe how ballet dancers experience space, time, weight and strength, as well as how they sense body capacities, in a kinesthesic way.Almeida and Flores-Pereira (2013) call their research approach "embodied ethnography" (p.727), in an attempt to differentiate and to give a special qualification to the way researchers should use their body, senses and emotions as methodological sources to get an embodied (Flores-Pereira et al., 2008;Hindmarsh & Pilnick, 2007;Sinclair, 2005) and emplaced (Pink, 2011) comprehension of field data.The concept of emplacement, which is particularly important for Almeida and Flores-Pereira's (2013) study, ascribes the multisensoriality entangled in the relation researcher/researched in the environment, taking a social, cultural and historical dimension of it.
Even if Almeida and Flores-Pereira (2013) do not intend to situate issues related to identity/corporeality in the social-material arrangement of work within organizations, their work shows processes of embodiment of discipline and hierarchy of power of the dance world, as one of the authors joined the company for six months (it is worth mentioning she had previous experience in professional dance).In the data analysis, the authors reach conclusions that approach the previous findings of another ethnography of the body, conducted in the cultural field of professional ballet by Wainwright and Turner (2004a), for enabling the understanding of the balletic body as a series of cultural practices.In methodological terms, Wainwright and Turner (2004a) use ethnography to challenge the disembodied literatures on dance, specifically on ballet, that tend to adopt a post-modern reading of dance as text (Adshead-Lansdale, 1999;Desmond, 1997;Goellner & Murphy, 1995;Wainwright & Turner, 2004b).In theoretical terms, the authors assert that ballet is a social practice whose embodiment's process is related to distinctions in the ballet habitus that unfolds in three dimensions: an individual one, an institutional one and a choreographic one.By taking the concept of habitus of Pierre Bourdieu (1984 as cited in Wainwright & Turner, 2004a) as a fruitful approach to both theory and research on the body, Wainwright and Turner (2004a) link body and professional identity in accordance with a post-structural perspective that broaches previous discussion about sociology of professions, as they eventually focused on professional identity as an embodied process that expresses the interrelationship between individuals and institutions, body and society.Almeida and Flores-Pereira (2013), in their turn, center their attention in how ballet identity is embodied through pain and extreme technical demand and also how aging and physical changes of the ballet dancer's body is a threat to a dancer's identity.From a methodological point of view, what is new about their work is the use of non-representational ethnography for researching ballet dancers as professionals that depend on their bodies to work just like any other professional class.The choice of ballet dancer's work as research field substantiates what Chandler (2012, p. 865) has called "the use of dance analogy as heuristic device in ethnographies of work".Ballet dancers were chosen by Almeida and Flores-Pereira (2013) as obvious examples of the embodiment of professional identity, but such an embodiment process could be investigated in any profession, even those that seem more mental than physical, such as the work of an accountant for example.Nevertheless, the nature and variety of dance can be explored as a way of studying movement, gendered embodiment, audience, emotion and rhythm at work (Chandler, 2012).In theoretical terms, the main contribution of this paper is the way it relates those embodiment processes to the social-cultural and historical background of battle practices and the organizational culture of ballet companies.Again, the institutional context of ballet professional practice is the background for the understanding of the embodiment process of professional identity.While such an approach has much in common with classic studies of work, i.e. sociology of professions, its ethnographical approach also unsettle some of the dominant ways of researching work and its organization.
Conclusion
In this article, we choose to make an analogy with craft to elaborate a contribution to the embodiment studies in OS.As Wainwright and Turner (2004a), Chandler (2012) and Almeida and Flores-Pereira (2013) have taken dance as a useful heurism to study work from a practical and embodied perspective, we assumed that the concept of mastery could be particularly useful to guide studies on transformation processes of the body.As cognition that emerges from practice in a polished, sophisticated and beautiful way, mastery highlights not only embodied processes, but also the social organization of knowledge entangled through practice and the singular configuration of factors that help build work identities.It goes beyond the social division of labor, even though it can reinforce it.Thus, the theoretical contribution of this paper is to reinforce the concept of mastery as a specific way of understanding skill, one that results from knowledge embodiment and that changes the whole person.It also highlights a way of understanding work identity as an embodied matter, countering the tendency to explore such topic from a structural-functionalist viewpoint.Accordingly, the originality of this contribution relies on the theoretical background that we have chosen to support it, which is the embodiment paradigm and the philosophic viewpoint of phenomenology.By doing this, we also contribute to the debates around and about the limits of body and self within OS, taking a practical standpoint.The concept of mastery, which comes from the metaphor of craft, also situates this paper in a post-modern or a non-modern perspective on the study of learning through practice in OS.
In methodological terms, the concept of mastery may embrace some unspeakable aspects of the knowledge embodiment process, serving as a representational source for researchers that investigate working practices from a phenomenological standpoint that aims to explain inexpressible features of know-how.Understanding, for example, that mastery is related to elegance in gestures of the skilled worker (Barzin, 2013), allows us to elaborate aesthetic parameters to understand and to explain the nature of work that is made by the person that becomes a master in his/her professional field.Analogously, paying attention to social-material and, also, historical dimensions of certain practices is something that needs to be done in order to explain the richly structured environment (Ingold, 2000) where practices take place.Findings that came from theoretical reflections launched in this paper should be supported by empirical research.It is also worth mentioning that the concept of mastery and the analogies with craft that have framed the undertaken reflections are non-modern and, as such, quite distant from the organization of work in the contemporary capitalistic world.It would be necessary to take it as a metaphor for contemporary work, with the abstractions and adaptations that this may imply.We conclude this article asserting the importance to understand the body as a whole of subject-object and likewise that it is necessary to place the body in the environment in order to understand practical learning process. | 11,312 | 2015-12-01T00:00:00.000 | [
"Art",
"Philosophy"
] |
Inference in economic experiments
Replication crisis and debates about p-values have raised doubts about what we can statistically infer from research findings, both in experimental and observational studies. With a view to the ongoing debate on inferential errors, this paper systematizes and discusses experimental designs with regard to the inferences that can and – perhaps more important – that cannot be made from particular designs. JEL B41 C18 C90
Introduction
Starting with CHAMBERLIN (1948), SAUERMANN and SELTEN (1959), HOGGATT (1959), SIEGEL andFOURAKER (1960), andSMITH (1962), economists have increasingly adopted experimental designs over the last decades. Their motivation to do so was to obtaincompared to observational studiesmore trustworthy information about the causalities that govern human behavior. Unfortunately, it seems that in the process of adopting the experimental method, no tightly inference-focused systematization of economic experiments has emerged. Some scholars use randomization as the defining quality and equate "experiments" with "randomized controlled trials" (ATHEY and IMBEN 2017). Despite ensuing changes in the nature of feasible inferences, other researchers include non-randomized designs into the definition as long as behavioral data are generated through a treatment manipulation (HARRISON and LIST 2004). One might speculate that economists tend to conceptually stretch the term "experiment" because the seemingly attractive label suggests that they have adopted "trustworthy" research methods that are comparable to those in the natural sciences. Whatever the reason, confusion regarding the different types of research designs that are labeled as experiments entails the risk of inferential errors.
The inferences that can be made from controlled experiments based on the ceteris paribus approach where "everything else but the item under investigation is held constant" (SAMUELSON and NORDHAUS 1985: 8) are different from those that can be made from observational studies. The former rely on the research design to ex ante ensure ceteris paribus conditions that facilitate the identification of causal treatment effects. Observational studies, in contrast, rely on an ex post control of confounders through statistical modeling that, despite attempts to move from correlation to causation, does not provide a way of ascertaining causal relationships that is as reliable as a strong ex ante research design (ATHEY and IMBEN 2017). But even within experimental approaches, different designs facilitate different inferences.
In this paper, we address the question of statistical and scientific induction and, more particularly, the role of the p-value for making inferences beyond the confines of a particular experimental study. We aim at an adequate differentiation of experimental designs that contributes to a better understanding of the inferences that can andperhaps more importantthat cannot be made from particular designs. For the sake of simplicity, we limit the discussion of treatment comparison to binary treatments.
Experiments aimed at identifying causal treatment effects
The label "experiment" is first of all used for studies that, instead of using survey data or pre-existing observational data, are based on a deliberate intervention (treatment) and a design-based control over confounders. Identifying the effects of the treatment on the units (subjects) under study requires a comparison; often no-treatment observations are compared to with-treatment observations. Two different designs are used to ensure control and thus ceteris paribus conditions: (1) Randomized controlled trials rely on a between-subject design and randomization to generate equivalence between compared groups; i.e. we randomly assign subjects to treatments to ensure that known and unknown confounders are balanced across treatment groups (statistical independence). (2) Non-randomized controlled trials, in contrast, rely on a within-subject design and before-and-after comparisons; i.e. we try to hold everything but the treatment constant over time and compare the before-and-after-treatment outcomes for all subjects who participate in the experiment. 1 The persuasiveness of causal claims depends on the credibility of the alleged control. Comparing randomized treatment groups is generally held to be a more convincing device to identify causal relationships than before-and-after treatment comparisons (CHARNESS et al. 2012). This is due to the fact that randomization balances known and unknown confounders across treatment groups and thus ensures statistical independence. 2 In contrast, efforts to hold everything else but the treatment constant over time in before-and-after comparisons are limited by the researcher's capacity to identify and fix confounders. A particular threat to causal inference arises when subjects' properties change through treatment exposure. That is, holding "everything" but the treatment constant over time can be difficult because sequentially exposing subjects to multiple treatments may cause order effects that violate the ceteris paribus condition (CHARNESS et al. 2012). However, as CZIBOR et al. (2019 emphasize, within-subject designs also have their advantages: besides the fact that they can more effectively make use of small experimental groups, they facilitate the identification of higher moments of the distribution. Whereas betweensubject designs are limited to estimating average treatment effects, within-subject designs enable researchers to look at quantiles and assess heterogeneous treatment effects among subjects. Due to the particular credibility of randomization as a means to establish control over confounders, the use of the term "experiment" -accompanied by the label "natural"has even been extended to observational settings where, instead of a deliberate treatment manipulation by a researcher, the socio-economic or natural environment has randomly "assigned treatments" among some set of units. Regarding this terminology, DUNNING (2013: 16) notes "that the label 'natural experiment' is perhaps unfortunate.
[…], the social and political forces that give rise to as-if random assignment of interventions are not generally 'natural' in any ordinary sense of that term. [… and], natural experiments are observational studies, not true experiments, again, because they lack an experimental manipulation. In sum, natural experiments are neither natural nor experiments" but may be structurally close to randomization. 3
Inferences in experiments based on treatment comparisons
Sharing the essential approach of providing for an ex ante, design-based control over confounders through the introduction of a well-defined treatment into an otherwise controlled environment, randomized-treatment-group comparisons and before-and-after-treatment comparisons facilitate causal inferences. The meaning of statistical inference and the p-value, however, are different in the two cases. In randomized-treatment-group comparisons, the p-value linked to the treatment difference is usually based on the approximation of the randomization distribution (cf. RAMSEY and SCHAFER 2013), i.e. the distribution of the difference between group averages and the standard error used in a two-independentsample t-test. Regardless of how participating subjects were recruited, the resulting p-value targets the following question: when there is no treatment-group difference, how likely is it that we would find a difference as large as (or larger than) the one observed when we repeatedly assigned the experimental subjects at random to the treatments under investigation (VOGT et al. 2014: 242). In randomized controlled experiments, the evaluation of internal validity and causal inference can be aided by statistical inference based on the p-value, which represents a continuous measure of the strength of evidence against the null hypothesis of there being no treatment effect in the group of experimental subjects. While scientific inferences beyond the confines of the experimental group under study are often desired, it must be recognized that randomization-based inference is no help for generalizing from experimental subjects to a broader population from which they have been recruited. Using statistical inference to help make such generalizations would require that, besides being randomized, the recruited experimental subjects had been randomly drawn from a defined parent population. If they are not, extending inference from the experimental subjects to any broader group must be based on scientific reasoning beyond statistical measures such as p-values. This implies accounting for contextual factors and the entirety of available knowledge including external evidence for the phenomenon under study. 4 When we not only randomize a given group of experimental subjects but also recruit them from a defined parent population through random sampling, the question arises of how to link randomization-based inference, which is concerned with internal validity and causality, to sampling-based inference, which is concerned with external validity and generalization towards the broader parent population. The "true" standard error of the randomization distribution would reflect the idea of frequently re-randomizing a given group of, let's say, n =100 subjects in hypothetical experimental replications. The standard error in a two-independent-sample t-test, in contrast, presumes that we repeatedly draw random samples of n = 100 subjects from a population before carrying out the randomized experiment. As stated above, two-sample t-tests are often also used for causal inferences from randomized-treatment-group comparisons even though they are conceptually based on random sampling from populations. If we accept the sampling-based standard error as an approximation of the randomization-based standard error (ATHEY and IMBEN 2017)it is an upwardly-biased approximation because it considers sampling error in addition to randomization errorthe resulting p-value can be used as an aid for simultaneously assessing internal and external validity. One should always be explicit about the fact, however, that the interpretation of the p-value must be strictly limited to causal inferences within the given group of experimental subjects when the group of experimental subjects was not recruited through random sampling.
Contrary to randomization, a p-value associated with the treatment difference in before-and-after-treatment comparisons is conceptually per se based on random sampling and the sampling distribution, i.e. the distribution of the average individual before-and-after difference and thus the standard error in a paired t-test. This is just another label for a one-sample t-test on the variable "individual before-andafter differences." Statistical inference based on the one-sample p-value implies that we concern ourselves with the question of what we can learn about the population mean from a random sample. In other words, we are asking the following question: assuming there is no difference in the population, how likely is it that we would find an average before-and-after difference as large as (or larger than) the one observed if we carried out very many statistical replications and subjected repeatedly drawn random samples to the same treatment procedure. Therefore, our p-value is a continuous measure of the strength of evidence against the null of there being no treatment effect in the parent population. While being an inferential tool to help make generalizations from the sample of experimental subjects to a broader population (external validity), it must be recognized that a p-value in before-and-after comparisons is no help whatsoever for assessing causality. Instead, causality claims hinge on the credibility of the ceteris paribus claim and must be based on transparent experimental protocols that show what exactly researchers did to hold everything but the treatment constant over time. A p-value in a one-sample t-test informs us about the random sampling error, irrespective of whether our experimental procedure was successful in holding everything but the treatment constant over time or not. The only important assumption is that the treatment that leads to the observation of individual before-and-after differences presumably remains unchanged over all statistical replications. One should be clear that there is no role for a p-value when subjects in before-and-after-treatment comparisons are not randomly recruited.
Being a probabilistic concept based on a chance model (i.e. a hypothetical replication of a chance mechanism), p-values are not applicable if there is no random process of data generation (either randomization or random sampling). When there is no randomization, maintaining the p-value's probabilistic foundation therefore poses serious conceptual challenges when we already have the data of the whole target population (DENTON 1988: 166f.). An example is an experimental within-subject design where experimental subjects are clearly a non-random convenience sample, or where we do not want to generalize beyond the confines of the particular sample to start with. In such cases, the sample already constitutes the finite population to which we are limited. Due to the lack of a chance mechanism that could hypothetically be replicated, there is no role for the frequentist p-value and statistical significance testing. The fact that there is no room for statistical inference when we already have data of the entire inferential target population is formally reflected in the finite population correction factor. Rather than assuming that a sample was drawn from an infinite populationor at least that a small sample of size n was drawn from a very large population of size Nthe finite population correction factor (1-n/N) 0.5 accounts for the fact that, besides absolute sample size, sampling error decreases when the sample size becomes large relative to the whole population. The correction reduces the standard error and is commonly used when sample share is more than 5% of the population (KNAUB 2008). Having the entire population corresponds to a correction factor of zero and thus a corrected standard error of zero.
If p-values are nonetheless calculated for entire populations (or non-random samples for that matter), one would have to imagine an infinite "unseen parent population" (or "superpopulation"), i.e. an underlying stochastic mechanism that is hypothesized to have generated the observations in the observed sample. DENTON (1988) critically notes that this rhetorical device, which is also known as "great urn of nature," does not evoke wild enthusiasm from everybody. "However, some notion of an underlying [random] processas distinct from merely a record of empirical observationshas to be accepted for the testing of hypotheses in econometrics to make any sense" (DENTON 1988: 167). We would add that researchers who resort to the p-value in such circumstances should explicitly explain why and how they base their inferential reasoning on the notion of a superpopulation. When doing so, they should be clear that this notion does not facilitate statistical inference in the conventional sense of generalizing towards a numerically larger parent population. Instead, inferences would be limited to the unseen superpopulation in terms of a random process that is supposed to "apply" to only and exclusively the subjects who happen to be in the sample.
Inferences in experiments without treatment comparisons
In experimental treatment comparisons, the term "control" means first of all generating ceteris paribus conditions (ex-ante control over confounders) with the objective of identifying causal treatment effects. We know that this ex-ante control comes in two forms: in randomized-treatment-group comparisons, control over confounders is achieved without exercising control over the environment; i.e. randomization, which balances confounders (including unknown ones) across treatment groups, replaces environmental control. In before-and-after-treatment comparisons, in contrast, control over confounders requires that we exercise control over the environment and fix and maintain all factors that could influence subjects' behaviors besides the treatment under investigation.
Often, economic experiments do not settle for identifying causal treatment effects among experimental subjects in more or less artificial experimental environments. Instead, experimenters want to learn what governs the behaviors of certain social groups in relevant real-world contexts and, eventually, how policy interventions would work in these contexts. This requires not only going beyond internal validity and causality. It also requires moving external validity beyond statistical inference, which is solely concerned with random error in repeated random sampling from the same population and thus the sample-population relationship. That is, we cannot limit ourselves to the question of how we can generalize from the behavior of experimental subjects in a particular but potentially uninformative experiment to the would-be behavior of the parent population in this very experiment. Instead, we need to address the experiment-real-world relationship. Or using a well-known expression coined by SMITH (1982), we should exercise "control over subjects' preferences" and search for experimental designs which ensure that subjects' choices in the experiment reveal their "true" preferences. In the terminology of measurement theory we would say that, besides the uncertainty of the measurement due to sampling error (measurement precision/reliability; signal-to-noise ratio), we are now concerned with the accuracy of the measurement (measurement validity) and the question of whether the measurement instrument "experiment" yields a manifest variable (observed experimental behavior) that is informative regarding the latent variable of interest, i.e. people's true preferences. It should be noted that an experiment's measurement accuracy cannot be evaluated by statistical tools. It can only be evaluated based on the logical consistency and plausibility of the argument that is put forward in justification of the particular experimental design and/or in relation to a presumed standard of knowledge.
Control over subjects' preferences is crucial for the external validity of economic experiments irrespective of whether they are based on treatment comparisons or not. However, this aspect of external validity is often more salient in economic experiments that study only one treatment and do not aim for causal inferences through ceteris paribus treatment comparisons. While still relying on an experimenter's intervention, such experiments are focused on measuring latent preferences such as individual risk or social preferences. Prominent examples are experimental games such as prisoner's dilemmas, trust games, or public goods games that are implemented to find out, for instance, whether the choices made by individuals are in line with conventional rational choice predictions. 5 For example, one might deliberate how large the real payments (incentives) that are linked to subjects' abstract earnings in a dictator "game" would have to be to achieve a valid measurement in that these incentives make subjects reveal their true prosocial preferences. Another example is the attempt to avoid "experimenter demand effects" that often threaten external validity because subjects are usually aware of participating in an experiment and often inclined to please experimenters (DE QUIDT et al. 2018). When assessing the quality of the experimental control over subjects' preferences, one should be clear that this aspect of external validity has nothing to do with p-values. In other words, we may jointly have randomization and random sampling and control over subjects' preferences in an experiment. However, we may also have an experiment without randomized treatment comparison and without random recruitment, but with an attempted control over subjects' preferences. Imagine an incentivized dictator game carried out with a non-random convenience sample of students who happen to be in an experimenter's classroom on a particular Friday. In this case, all inductive inferencesbe they towards the experimental behavior of a broader population of students or other demographic groups, or towards the real-life behavior of the classroom students or broader populationsmust be based on scientific arguments beyond p-values. It would therefore be a gross abuse to use the term "statistical significance" for a purported corroboration of such inferences.
Control over the environment, in terms of shaping, knowing, and describing all behaviorally relevant factors besides the treatment of interest, generally decreases from lab experiments to field experiments, irrespective of whether they are based on treatment comparisons or not. Any taxonomic proposal that takes account of the diminishing control over the environment from the lab to the field is open to debate at least for non-randomized experiments. Attaching the label "experiment" to studies that rely on proper randomization to control for confounders is likely to cause little controversy even when they are carried out in the field where it is difficult to know, let alone fix all relevant factors besides the treatment.
In non-randomized designs, in contrast, the classification is likely to become controversial at some point; i.e. an arguable minimum level of control over the relevant environment would seem to be a prerequisite for calling a non-randomized approach an experiment. Irrespective of the label, we must take account of the specific research design when making inferences: (1) Causal inference must be based on scientific arguments but cannot be supported by a p-value when an experiment is not based on randomization. An important example are experimental within-subject designs. When causal inferences are based on doubtful claims of control over confounders, one should consider alternative experimental designs (e.g. randomized instead of non-randomized designs) or even a regression-based statistical control of observable confounders. 6 (2) Inference dealing with the sample-population relationship (generalization) must be based on scientific reasoning but cannot be supported by a p-value when there was no random sampling from a broader (numerically larger) population. This is the case, for example, when randomized experiments are carried out with subjects from non-random convenience sample. (3) Inference dealing with the experiment-real-world relationship and thus the question of whether experimental subjects reveal their "true" preferences in a particular experiment cannot be supported by a p-value at all. When the control over subjects' preferences is in question, one should avoid overhasty conclusions and check the robustness of results in replication studies with more valid experimental designspreferably in field experiments carried out with subjects from the relevant parent population and a manipulation of subjects' real-life environments.
Inferences in quasi-experiments
Often, non-randomized study designs focus on the behavioral outcomes induced by an intervention in one social group as opposed to another. Such designs are examples of "quasi-experiments" (CAMPBELL and STANLEY 1966) in which the ceteris paribus condition is in question. For illustration, imagine a dictator "game" in which a mixed-sex group of experimental subjects are used as first players who can decide which share of their initial endowment they give to a second player (one person acts as second player for the whole group). Additionally, assume that the experimental subjects are a convenience sample but not a random sample of a well-defined broader population. What kind of statistical inferences are possible? Neither one of the two chance mechanismsrandom sampling or randomizationapplies. Consequently, there is no role for the p-value: (i) Statistical inference towards a wider population beyond our experimental subjects is not possible because we are limited to a non-random sample. (ii) Statistical inference regarding causal relationships is not possible because there was no random assignment of subjects to treatments. Instead, one treatment was used to obtain a behavioral measurement in two predefined social groups. We should therefore simply describe, without reference to a p-value, the observed difference and the experimental conditionsor carry out a regression analysis to control for 6 There is no need to resort to regression when proper randomization ensures ex ante that confounders are statistically independent of treatments. In some cases, for instance when only a small experimental group is available (cf. footnote 2), switching to an ex-post control of confounders in a statistical model may be appropriate, however. It may therefore be useful to realize how, in the simplest case without confounders, a treatment-group comparison relates to a linear model where we regress the response to a binary treatment dummy and a constant. Generally speaking, the sampling distributions of estimated regression coefficients ̂ that link predictors to response are the distributions of the point estimates derived from a hypothetically repeated random sampling of the response variable at the fixed values of the predictors (RAMSEY and SCHAFER 2013: 184). Using a dummy regression (and a p-value based on the sampling distribution) instead of comparing two group averages (and a p-value based on the randomization distribution) can therefore be questioned on the grounds that it implies switching to a chance model that is at odds with the actually applied chance mechanism. There are specific constellations (equal variance in both groups or, alternatively, heteroscedasticity-robust regression standard errors) that lead to identical standard errors. However, group comparison and dummy regression only coincide as long as the former is based on the sampling-based approximation of the standard error of the randomization distribution. If the group comparison were based on the "true" standard error of the randomization distribution, we would obtain a lower standard error compared to which the standard error in the regression would be upwardly biased (ATHEY and IMBEN 2017). confounders if necessary; for example, the male subjects may be more or less wealthy than the female subjects which could be another explanation for the differences between the two groups.
Due to engrained disciplinary habits, researchers might be tempted to implement "statistical significance testing" routines in our dictator game example even though there is no chance model upon which to base statistical inference. While there is no random process, implementing a two-sample t-test might be the spontaneous reflex to find out whether there is a "statistically significant" difference between the two sexes. One should recognize, however, that doing so would require that some notion of a random mechanism is accepted. In our case, this would require imagining a randomization distribution that would have resulted if money amounts had been randomly assigned to sexes ("treatments"). Our question would be whether the money amounts transferred to the second player differed more between the sexes than what would be expected in the case of such a random assignment. We must realize, however, that there was no random assignment of subjects (with all their potentially confounding characteristics) to treatments, i.e. the sexes might not be independent of covariates. Therefore, the p-value based on a twosample t-test for a difference in mean does not address the question of whether the difference in the average transferred money amount is caused by the subjects' being male or female. That could be the case, but the difference could also be due to other reasons such as female subjects being less or more wealthy than male subjects. As stated above, it would therefore make sense to control for known confounders in a regression analysis ex postagain, without reference to a p-value as long as the experimental subjects have not been recruited through random sampling.
Conclusion
Systematizations of economic experiments have not predominantly addressed the inferences that can be made in different types of experimental designs. Usages of the term "experiment" range from a narrow view of "applying randomization" to identify causal effects, to a broad perspective of "trying something out" or measuring something. Our paper has shown that an adequate differentiation of experimental designs advances the understanding of what we can infer from different types of experimental studies. Several points should be kept in mind: first, a random process of data generationeither random assignment or random samplingis required for frequentist tools such as p-values to make any sense, however little it may be. Second, the informational content of p-values are different in randomizationbased inference as opposed to sampling-based inference. Randomization-based inference is concerned with internal validity and causality, whereas sampling-based inference is concerned with external validity in terms of generalizing from a sample to its parent population. Third, while being conceptually different, the sampling-based standard error used in a two-sample t-test can be used as an approximation in randomization-based inference. If one accepts the approximation, and if experimental subjects are recruited through random sampling, the resulting p-value can be used as an aid both for assessing internal validity and for generalizing to the parent population. However, if experimental subjects are not randomly recruited, statistical inferences must be limited to assess the causalities within the given study population. Forth, in the context of economic experiments, there are two essentially different meanings of the term "control" that must not be confused. In experiments aimed at identifying causal treatment effects, control means first of all ensuring ceteris paribus conditions (statistical independence of treatments). Besides that, the term "control" is concerned with external validity beyond the sample-population relationship. The expression "control over preferences" is used to indicate experimental designs in which a valid measurement is achieved in that experimental subjects can be believed to reveal their true real-world preferences. This design quality, which is crucial for making valid inferences, is part of scientific reasoning but cannot be aided by p-values. | 6,245.4 | 2020-02-18T00:00:00.000 | [
"Economics"
] |
Comprehensive study of multi-resource cloud simulation tools
This paper aims to explore Cloud simulation tools comprehensively. Specifically, it is to propose which simulator will fit in one’s preferences since each simulator has its purpose. Gathering data from research papers along with the simulation processes of four cloud simulators provides a comprehensive approach for identifying the parameters in percentage, characteristics and important features of each cloud simulator. Utilizing cloud simulation tools during testing and modeling the real cloud datacenters provide a test environment which gives a repeatable and controllable environment promptly. The said tools offer the possibility to determine quickly whether the wise guess is true or false. Possibly, the stakeholder can map according to the algorithm used, and give various workloads, tasks, the number of hosts, and virtual machines. Also, the inexpensive way to study how the real cloud datacenters work brings more flexibility and scalability. Cloud simulation tools should be the primary instrument for any cloud computing testing, modeling, and technique.
Introduction
*Cloud computing continues to stand among the delivery models in the 21st century. During its milestone beginning in 2009, the name cloud computing made itself a tag name as 'cool and fancy' in the field of business and IT industry. It aims to bring every business into something called the hype of modern business. However, despite its discovery, it seems there are still many issues raised by researchers, CEO's of the companies, representatives of small and medium business enterprise, and different sectors' representatives. Is cloud computing necessary or does it only create more confusions? The concept of cloud computing is the separation of the computing components that turns into essential services. These services are accessible via cloud servers such as software application, operating system, and hardware. In recent days, cloud computing overtakes the traditional model called Distributed computing because of its capabilities. It is a new paradigm where services in computing, as well as networking resources, are delivered and accessed over the internet. Adopting cloud computing needs a thorough decision-making strategy. It is like putting up a business project that requires a detailed plan and feasibility study. It needs to seek a clear understanding of the firm perspective like the rise and fall during its implementation, the cost of the project and how long it will stand. Likewise, adopting and building the cloud computing requires such knowledge.
Cloud computing offers the following services model: Software as a service (SaaS), Platform as a service (PaaS) and Infrastructure as a service (IaaS). In SaaS, the server connects to thin-clients where they can access the software. The purpose is to reduce software maintenance. Similarly, SaaS provides access to the software running on the server. The PaaS as a platform works at the lower level than SaaS. It provides an abstract level where software is developed and deployed over cloud server. While the IaaS serves a foundation of cloud computing where the resources allocate the storage capacity, processing time, processing power, networking and other services of the cloud server. These services bring a new horizon for the researchers to discover new solutions to the problems, new opportunities, and ideas to explore. Moreover, despite the feedbacks of the masses, there are still standard data which proves that almost of the companies are promoting cloud computing.
According to the survey, out of 1060 IT experts asked about their adoption of cloud infrastructure, 42% of the respondents spoke to ventures with more than 1,000 representatives. The margin of error is 3.07% (Weins, 2016). Additionally, 19% of the European Union (EU) enterprises used cloud computing in 2014. 46% of those organizations used innovative cloud services on financial and accounting software applications, consumer relationship management or for computing power to operate online business requests. The data indicate that more researchers will go through and explore this area for various studies (Giannakouris and Smihily, 2014).
Understanding more about cloud computing takes a lot of effort and money because of its limitations. However, different simulation tools are used to fill the gap. Cloud simulation tools are the leading tools for studying the behavior of real cloud infrastructure. The cloud simulations bring novel solutions in observing cloud itself. It models the real cloud scenarios such as for the creation of datacenter, host, virtual machines and scheduling policies. Thus, it makes cloud modeling easy and gives precise results.
This paper aims to study cloud simulation tools comprehensively. The primary involvement of this paper is to propose which simulator will fit in one's preferences since each simulator has its purpose. Section 2 converses the related work and section 3 describes the comprehensive study of four cloud simulation tools. Section 4 presents the comparative analysis, and section 5 elaborates the conclusion of this study.
Literature survey
Cloud computing aims to step up in the level which enables users to use applications and other services on rent. Accordingly, each of the services plays a significant role in computing, business, and IT industry. The cloud simulation tools model these services. The CloudSim simplified the processes of the real cloud. It is a tool for modeling and simulation of an extensible cloud. The CloudSim itself has an architecture resemblance to what exists in real cloud computing architecture. The lowest level consists of SimJava, which is responsible for the simulation frameworks such as queuing, processing of the events and the creation of system components like Datacenters, Broker, Host, Virtual machines (Vm), and other services. Next is the GridSim, which is responsible for the support of multiple Grid infrastructures like network devices, data sets, workload traces and information services. Then, there is layer located in the CloudSim which owns the major functionalities of the architecture by extending and coding it wisely. Moreover, the topmost layer is the user-code which is responsible for revealing the configuration for hosts, applications, virtual machines, users, application types, and Broker scheduling policy (Buyya et al., 2009). Thus, it models both the system and behavior of clouds. Also, it reveals the visual part of the system through which most of the users interact primarily in implementing policies and at the same time it provides an efficient way to distribute virtual machines within the network of the cloud. Besides, CloudSim modeling critically enhances the quality of service of an application under fluctuating resource and pattern of service request (Calheiros et al., 2010;Shaikh and Sasikumar, 2013). The CloudSim is also deployed and has shown how it works in NetBeans (Amipara, 2015). The comprehensive utilization of CloudSim gives basic classes and elements. The user can modify the case-specific activities according to their configurations. In particular, the user can change different intangible provisioning classes such as Vm Allocation Policy, Bandwidth Provisioner, Vm Scheduler, Cloudlet Scheduler, Power Vm Allocation Policy, and Memory Provisioner. Therefore, these classes can be modified as indicated by particular requirements for any application or research by defining the Abstract classes (Humane and Varshapriya, 2015). In contrast, CloudSim version 1.0 has identified some problems. First, the creations of some VMs are not possible due to the saturation resource in Datacenters and the Cloudlets assigned to this virtual machine may lose. Second, there is no link between datacenters. The relationship between datacenters is necessary. It will prompt to communicate and exchange any services and information on the load for a possible load balancing policy. Therefore, the CloudSim has introduced a new approach in extending its features. Also, the use of ring topology between datacenters is helpful. Likewise, Virtual machine creations within one or more datacenters are attainable through specified Broker policy (Belalem and Limam, 2011). Similarly, the CloudSim offers limited support to use resources due to the bottleneck, so the researcher introduced Cloud2sim as one among the variants (Kathiravelu and Veiga, 2014).
The cloud computing services have mainly segregated when it comes to infrastructure, platform, and software. The cloud service providers ensure the delivery to the customers. The payment will vary according to their request on per usage basis. Likewise, providers will do the management and update parts of the services availed by the client. Although it gives more advantages to the client to study how much money incurred per usage, this scenario is hard to meet in the real cloud. This reason made it for cloud simulators modeled purposely. The research concluded that no cloud simulation tool is better than each other because each of them has its advantages and disadvantages. Moreover, it varies according to the simulation requirements of the user (Kumar and Anjandeep, 2014).
In spite the popularity of cloud computing but still, most companies cannot avoid the bottlenecks. During giving up physical possession, reliability and security additional problems may still exist like the failure of monoculture, cloud provider trustworthiness, and staying in control (Schill, 2013). Then, FlexCloud introduced as a novel simulator that can test the performance of VMs within the premises of the datacenters. This cloud simulation tool is known as flexible and scalable in simulating resource scheduling in the cloud datacenters (Xu et al., 2015). Through the various algorithms present in FlexCloud, it can also simulate VM provisioning requests and performance assessment. It focuses on Infrastructure as a service (IaaS). Through its user-friendly interface, it is possible to repeat and customize the configurations. The VM migration is also possible to model. The FlexCloud has the capability to reduce the computing time and memory intake because it can handle largescale simulations. The unified feature of FlexCloud does make it support most of the public cloud providers by achieving energy-saving scheduling and balancing of workload (Ettikyala and Devi, 2015).
Today, the Cloud computing rapidly expanded its clients who have constrained the Cloud service provider to open more Datacenters for facilitating their administrations effectively. This growing demand has increased the energy consumption of most Cloud Datacenters. This high-power consumption will increase the effective cost and cut the limit of the income of most Cloud service providers. It also affects the surroundings by the emitted carbon. To make Cloud computing an ecofriendly, Green computing is the alternative way because it produces the solution for energy-efficient (Doraya, 2015). Additionally, due to some incapacity of most simulators, GreenCloud is developed. It supports Cloud communication because of its nature as a packet network simulator. The various communication and computing resources such as a server, router, switches and other links will collect energy consumption. It evaluates workload distributions.
Through which the power consumption will dramatically cut by consolidating workload with Datacenter virtualization. The various power management layouts such as voltage scaling and high-powered shut down for computing and network components can prove through the results during simulations acquired in different tiers (Kliazovich et al., 2010). However, GreenCloud simulator escalates time of simulation, and it requires a larger amount of memory, which makes it ideally suited for small Datacenters. Moreover, GreenCloud still has intricate patterns of power consumption, even as it provides a standard set of policy (Ettikyala and Devi, 2015).
According to recent surveys, more than fifty percent (50%) of the organizations and companies, mainly medium and large size have already migrated to the Cloud computing. Indeed, even nowadays the resources provided as pay-per-use is widely accepted in IT industry but still, cloud computing faces various challenges. These challenges are automated provisioning services, VM migration, consolidating servers, managing energy consumption, analytics on traffic data, security of data, and software infrastructures which need sufficient volume of research to become stable. The implementation of the research in real Cloud faces difficulties due to the expensive costs during the setting up of a cloud environment. The iCanCloud simulator is SimCan based, which can put on to specific hardware. The iCanCloud can determine the trade-off between costs incurred versus the rate of the performance during simulations. This advantage may give an idea to the stakeholder about the amount costs during the process. The pay-per-use model of this simulator makes it more perspectivewise during implementation. It also handles parallel setup execution over machines. The iCanCloud claims that two biggest features are still in development (power consumption and parallel experiments) and CloudSim already would have had a comprehensive set of different extensions, which vastly enhance its use. Additionally, the development and deployment of iCanCloud are also due to the features' constraints of the CloudSim and GrenCloud during simulation process (Suryateja, 2016).
Cloud simulation tools
The simulator is a prototype that imitates the operation of a real-world process or any system over a period. The process is called simulation. The Cloud Simulation provides an environment to study the real scenario of the modeled system, in which stakeholder will obtain the behavior of some entity or phenomenon.
CloudSim
CloudSim is a tool or non-volatile resource for modeling and simulation of the Cloud scenario. It is a Java-based toolkit that will ensure the creations of the following: (1) Datacenter or many hosts of computers for remote storage, processing or distribution of large amounts of data, (2) Host or Virtual servers, and (3) Virtual Machines Scheduler. The CloudSim evolves from different versions starting from the latest version down to old version are CloudSim 4.0, CloudSim 3.0.3, CloudSim 3.0.2, CloudSim 3.0.1, CloudSim 3.0, CloudSim 2.1, CloudSim 1.0. The CloudSim has a major bottleneck of lack of graphical user interface and report-wise, so several variants have developed such as iFogSim, CloudSimEx, WorkflowSim, Cloud2Sim, SimpleWorklow, DynamicCloudSim, RealCloudSim, CloudReports, CloudAuction, CloudMIG Xpress and CloudAnalyst. These variants have their specific tasks (Ashalatha, 2016). Thus, it is easy for the researcher to pick up one of these variants and implement Cloud scenarios. Fig. 1 shows the simulation process between each of the parameters. The datacenter models the hardware infrastructure. In CloudSim, a datacenter is a class form of codes where the creation of the host is possible. The Host is a node of physical machines that can manage virtual machines and instantiate virtual machine scheduler. The Virtual machine scheduler allocates the process in every virtual machine according to the scheduling policy. Fig. 2 shows the creation of Datacenter (Datacenter_0) along with a Host (Host #0) two Cloudlets namely Cloudlet_0 and Cloudlet_1 running inside Vm_0 and Vm_1 respectively. Both Cloudlets have different MIPS running in two virtual machines.
The two Cloudlets have the same finishing time of the task. The time varies depending on the requested VM performance. Table 1 demonstrates the distinct upgrades of each of the versions beginning from CloudSim version 3.0 up to version 4.0. The two checks [] suggest the New features.
FlexCloud
A Java-based cloud computing simulator claims to be a flexible and scalable simulator. The FlexCloud simply gives simple steps in the execution of the resource scheduling, simulates the process on how to initialize the cloud datacenter, allocates the virtual machine request, and provides performance evaluation for various scheduling algorithms. Also, it has a user-friendly graphical user interface in which user can configure small and large scale simulations by allocating the time and memory depending on the request services. Thus, is suitable for evaluating the Cloud Computing Infrastructure as a Service. The following are the essential features and advantages of FlexCloud: Built on the Java platform which runs on a single computer with JVM It focuses on IaaS where pattern design is flexible and extendable; It has a feature to add new scheduling algorithms; It has a user-friendly interface and configurations can be customized to simulate various conditions; The evaluation of the performance of the different scheduling algorithms produces simple diagrams. The computing time and memory consumption to support large-scale simulations have more benefits compared to CloudSim (Xu et al., 2015); The best software tool presented in Beijing Tongtech Software Innovation Contest. Fig. 3 shows the simulation process of FlexCloud. This picture gives five distinct scenarios: (1) Client resource request during the initial stage then (2) Client can choose and allocate the suitable resource (3) Feedback to the user (4) Scheduling task (5) Update and Optimization. All these steps can be done by the client and within the FlexCloud Scheduler Center. Fig. 6 shows the summary results in a diagram form. It displays the values of indices in y-axis and algorithm names in the x-axis. The red corresponds to the Online Random Algorithm. While the blue represents, Online Round-Robin Algorithm and the green is for LS algorithm. It shows the load stability for different algorithms. The figure also proves that under this simulator, the LS algorithm can overcome other two algorithms.
GreenCloud
GreenCloud is a classy simulator which focuses on cloud communications regarding the packet to packet simulation for energy-aware on the Datacenter. It is a sophisticated modeling simulator for energy used by the Datacenter, such as computing servers, network switches, and communication links. Focus on energy awareness in every network devices It simulates Cloud networks It can do the following: CPU simulation, memory simulation, storage simulation and networking resources simulation. The prototype energies are self-regulating for each type of resources It supports the virtualization and virtual machine migration The allocation of network-aware resource The complete implementation TCP/IP protocols It is an open-source and user-friendly interface. Fig. 7 shows the three layers of network namely Core, Aggregation and Access. The Core is the central part of a system where the datacenter locates. The Core provides various services to the customers when they connect to the Access network. The Aggregation is a network of combining (aggregating) multiple network connections in parallel. The purpose is to increase throughput for every single connection. When the links fail, it helps to identify the redundancy quickly. On the other hand, the Access is a network which connects the individual customer to their direct service provider. GreenCloud provides energy model on every switch or any device plugged into each network. Fig. 8 shows the simulation of the three tier Datacenter architecture along with the creation of switches namely Core, Aggregation and accessing 144 servers. The main parts of the simulation are building topology, creating Cloud users, showing simulation parameters, displaying simulation reports and making graphs. The current simulation has a datacenter capacity of 576057600 MIPS. Fig. 9 shows the data summary of simulation. The pie graph shows the total energy consumed which is 301.9 W*h. Each layer has consumed different energies. The Green represents the Server energy has consumed 138.6 W*h (46%). The aqua blue labeled as Switch energy (aggregation) has consumed 102.8 W*h (34%). The Yellow infers as the Switch energy (core) has consumed 51.4 W*h (17%), and the Red infers that the Switch Energy (access) consumed 9.1 W*h (3%).
iCanCloud
iCanCloud is a simulation framework based on OMNeT++ and INET frameworks. Thus, both structures are required to execute and develop new components for the said simulator. The forecast of trade-offs between amount incurred and the rate of the performance during simulation is the primary purpose of the iCanCloud. Furthermore, it shows the trade-offs of a particular set of running applications to the users. In detailed, a given performance will produce how much cost during simulation. Additionally, iCanCloud can be used by a variety of users, from inexperienced users to developers of large computing applications. Even though each user overlapped on several features provided by the Cloud but all of them have the same objective such as improving the trade-off between cost and performance which are the difficult task iCanCloud tries to lessen. Thus, this simulation platform claims to be a scalable, flexible, fast and easy-to-use tool, which let users, obtain results quickly to help them in taking a decision for paying a corresponding budget of machines. The most noteworthy features of the iCanCloud simulation tools include the following: Simulation of both existing and non-existing cloud computing design structures. The hypervisor module is versatile, and it provides easy steps for incorporating and studying of Cloud brokering policies both new and existent. The custom-made VM is helpful to execute unicore/multi-core environment. It supports an extensive variety of requirements for memory systems, which comprise the copies for the local memory system, system remote storage (NFS), and parallel system storage, such as system parallel file and RAID systems. The GUI is comprehensible. It simplifies the simulation and customizable to huge thin models. The GUI is valuable for handling storage of the preset VMs. It is intended for the storage of preset Cloud systems, managing a repository of preconfigured tests, sending tests from the GUI, and producing graphical reports. It offers a POSIX-based API and an improved MPI library for application modeling and simulation. It traces the physical requests; it utilizes a state graph, and through processing platform, it instantly designs the latest applications.
In the repository of the iCanCloud, it can add new modules to intensify the service of the processing environment (Castane et al., 2011). Fig. 10 shows the different layers of iCanCloud. The Virtual machines repository, Application repository, Cloud Hypervisor, and Cloud System. Each of the layers has different sub-tasks. The VMs repository is responsible for user-defined instances, and it models existing VMs which is the Amazon. The underlying system's API contains a set of system calls which is capable to directly communicate to the hardware models like storage system, CPU system, memory system, and network system. The Application repository has components of phobos, User-defined application, and map-reduce. These are predefined applications configured by the user. The Cloud hypervisor or sometimes called Cloud Broker which is responsible for handling jobs, scheduling policies and cost policies. The Cloud system represents the architecture of cloud and the deployment of VMs. Fig. 11 shows the configuration process. Starting from the user interface will do the following steps: generate Cloud model definition, generate user configuration, initializing the phase, the creation of specified cloud models, execute the simulation, load users/jobs and lastly create a report. Fig. 12 shows the creation of Cloud A with SmallCluster Datacenter and the user's CPU has allocated to 1000. Moreover, the different results are displayed such as Aggregated Energy (I) of each node, the Power (W) of each node and lastly Energy versus Power. Table 2 shows the comparison of Cloud simulation tools. The values of the parameters vary according to the following: Available/Yes/Supported, Limited/Work in progress and none if not supported. The seconds and minutes are the time taken by the simulator during the simulation process. During simulations of the following simulators, the CloudSim, FlexCloud, and iCanCloud got seconds except for GreenCloud. Among eighteen parameters, the four simulators are fully-supported in Operating System and Purpose. All the four simulators are open-source. Both CloudSim and FlexCloud, the implementations are through Java while the implementation of both iCanCloud and GreenCloud are in C++. The iCanCloud supports only parallel experiments, but regarding power consumption models it is still working in progress. Table 3 shows the equivalent values in numbers (1, 0.5, and 0). The Cloud simulation tools that have the values Available/Yes/Supported/Seconds have assigned a value of 1 while the Cloud simulation tools that have the values Limited/Work in progress/Minutes have assigned a value of 0.5. Moreover, the Cloud simulation tools that have the values none available have assigned a value of 0. According to the table, seven parameters are the values having 1 in each of the Cloud simulation tools. These parameters are namely Availability, Programming language, Simulator type, Platform, Advantages, Operating system support, and Purpose. The FlexCloud, GreenCloud, and iCanCloud garnered a total of 13.5 points while only CloudSim garnered a total of 11 pts. This table predicts which simulator has more composition of parameters.
Comparative analysis
As the computation results, Fig. 13 shows the composition of parameters present in each Cloud simulation tools in percentage. The three simulators accumulated the percentage of 67.5, GreenCloud, FlexCloud, and iCanCloud. The CloudSim accumulated 55 percent.
Conclusion
The study and evaluation of real Cloud framework are not feasible. Numerous factors are affecting why real Cloud is not always possible to implement for study. These are high infrastructure cost regarding procurement, high energy cost during implementation, limited accessibility of Cloud resources in geographical location, Cloud service provider will prohibit the data accessibility for study because of data privacy and confidentiality, infrastructure failure during implementation and repetitive implementation is more likely not possible. In any case, to make it feasible, the Cloud simulation tools are now filling these gaps. This paper has tried to discuss the four standard and modern tools to model and study the real Cloud. The paper has also described the simulation models, features, and simulation results of each of the simulation tools. In this paper, simulation tools have been discussed to provide clarity and definition, especially in the specific areas. In particular, it has deliberately shown that the Cloud simulation tools processes and architectures are much similar to the real Cloud infrastructures.
In conclusion, the principal source for the modeling of real Cloud through the Cloud simulation tools should be one of the main philosophies. Way back in the past, this has not been the principal source in studying and evaluating real Cloud infrastructure. Cloud Simulation tools should be the primary instruments for any Cloud testing and modeling. Just identify the requirements needed for simulations and then the user can choose among the four simulation tools according to their desired outputs. Through these simulation tools, improvement Cloud scenarios are possible because most of these tools are extendable, scalable, flexible, fast, open source, user-friendly, and result-oriented. | 5,870.2 | 2017-07-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
The Kitaev honeycomb model on surfaces of genus $g \geq 2$
We present a construction of the Kitaev honeycomb lattice model on an arbitrary higher genus surface. We first generalize the exact solution of the model based on the Jordan-Wigner fermionization to a surface with genus $g = 2$, and then use this as a basic module to extend the solution to lattices of arbitrary genus. We demonstrate our method by calculating the ground states of the model in both the Abelian doubled $\mathbb{Z}_2$ phase and the non-Abelian Ising topological phase on lattices with the genus up to $g = 6$. We verify the expected ground state degeneracy of the system in both topological phases and further illuminate the role of fermionic parity in the Abelian phase.
Introduction
The Kitaev honeycomb model is an example of an exactly solvable two-dimensional model that exhibits both Abelian and non-Abelian topological phases [1]. The Abelian phase, which is also known as the toric code [2], provides a realization of a topological quantum field theory known as doubled- 2 theory. The non-Abelian phase is effectively described by the Ising topological quantum field theory [3]. The main attribute of topological field theories is a dependence of the dimension of the relevant Hilbert space on the topology of the underlying manifold on which these theories are realized. For example the doubled- 2 theory is represented in the twodimensional toric code by a non-degenerate ground state on a genus 0 surface like an infinite plane or a sphere, and a four-fold degenerate ground state on a genus 1 surface like a torus. Similarly, the Ising topological field theory is linked to a three-fold degenerate ground state of the honeycomb lattice model in its non-Abelian phase on a torus. However, the square lattice of the toric code and the honeycomb lattice of the Kitaev model permit realizations of only these two surface topologies as the Euler characteristics of both lattices are zero.
We extend the solution of the Kitaev honeycomb model to closed surfaces of genus greater than one. We will rely on the exact solution of the model based on Jordan-Wigner fermionization [4][5][6]. This solution allows us to factorize the model into a fermionic superconductor on a topological doubled- 2 square lattice background or vacuum state. In order to generalize this to higher genus surfaces, we introduce a lattice which can be realized on such surfaces and accordingly adjust any definitions and relations to this context. We will then demonstrate the generalized solution on a number of different surfaces of genus greater than 1 by calculating the ground state degeneracy of the model in both the Abelian and non-Abelian phases. In this context, we also investigate additional features of these topological states that are intrinsic to their lattice realizations.
A natural framework for our investigation of two-dimensional lattice models whose topological phases effectively realize certain topological quantum field theories is the axiomatic definition of these theories. n-dimensional topological quantum field theory is defined as a functor from a category of n-cobordisms to a category of vector spaces [7,8] -( ) F n : Cob Vect. 1 is subject to certain axioms which for example ensure that vector spaces originating from topologically equivalent manifolds are isomorphic and that the disjoint union of (n−1)-manifold carries over to a tensor product between vector spaces. The functor satisfying these axioms is called modular and the underlying categories are called monoidal. We point out that realizing a topological phase of a physical system on a closed oriented surface of some genus represents a realization of an important part of this functor. Specifically it assigns to the surface (2-manifold) a vector space spanned by the ground states of the relevant physical system. We first give a concise overview of the model and its effective spin/hardcore-boson representation on a square lattice in section 2. A realization of lattices on higher genus surfaces is introduced in section 3, which is then followed by the implementation of the model on these lattices and its solution using the Jordan-Wigner fermionization in its effective spin/hardcore-boson representation in section 4. The last two sections describe the calculation of the ground state in section 5 and evaluation of the ground state degeneracy of the model on surfaces with the genus 2-6 in section 6.
The model
The Kitaev model [1] is a honeycomb lattice with a spin 1 2 particle attached to each vertex. Each spin interacts only with its nearest neighbors via an interaction term that depends on the orientation of the link (x, y or z) connecting them. Explicitly, if i and j label neighboring vertices connected by a link of orientation α, these spins interact via a term of the form s s We can also add a time-reversal and parity-breaking potential to this Hamiltonian which comes from third order perturbation theory of a weak magnetic field. The effective potential is k = å V V p p where κ is a coupling constant and the sum is over the plaquettes of the system with each hexagonal plaquette making the following contribution to the potential: where the sites of the plaquette p have been numbered as in figure 1. Hence, the full Hamiltonian of the model is We can define a vortex operator W p for each plaquette p of the lattice. If we number the sites of the plaquette p as in figure 1 The vortex operators W p commute mutually and with the full Hamiltonian, including the time-reversal and parity-breaking potential terms. Consequently the Hilbert space can be written as w p is the common eigenspace of the W p operators corresponding to the particular configuration of eigenvalues {w p } where w p =±1. We say that a vortex occupies the plaquette p if w p =−1 [1,9].
Kitaev [1] solved the system by a reduction to free fermions in a static 2 gauge field. He showed that the model exhibits four distinct topological phases including three Abelian toric code phases A x , A y , A z , satisfying Figure 1. The Kitaev honeycomb lattice model and its phase diagram. As shown on the left, any given link has one of three possible orientations, x, y or z and we number the vertices of a plaquette 1-6. The phase diagram can be thought of as the convex hull of the three points (J x , J y , J z )=(1, 0, 0), (0, 1, 0), (0, 0, 1).
:
, : , : , 6 x y z x y x z y z x y z and an additional phase B which occurs when all three inequalities above are not satisfied simultaneously. In the absence of the magnetic field the B phase is gapless, but in the presence of a magnetic field it acquires a gap and becomes the non-Abelian Ising phase. Its quasi-particle excitations, which in our representation are formed by Majorana fermions attached to vortices, show non-Abelian fractional statistics and are known as Ising anyons. As described in [6], the model can be mapped onto a square lattice whose vertices carry effective spins and hardcore bosons. In this representation, the Hamiltonian of the model (2) acquires the following form x q q q q n x q n q nx y q q z q q q n y q n q ny where the t a q are the Pauli operators for the effective spin at a site q and † b q and b q are creation and annihilation operators for hardcore bosons. The sums in the Hamiltonian are over all the sites of the lattice. The contribution to the potential V of a plaquette P has the following form in this representation The vortex operator W P for each plaquette P of the lattice is now defined as t t t t = -- q q q is the boson number operator. We say P is occupied by a vortex if the eigenvalue of the corresponding vortex operator is −1 and is empty otherwise.
Lattices on higher genus surfaces
We will now discuss the construction of lattices on closed surfaces with different topologies. To define the model on a closed surface of a higher genus, g>1, we necessarily have to consider a lattice with an Euler characteristic χ that is negative due to the relation c =g 2 2 . A perfect square lattice with the number of vertices V=N, the number of plaquettes or faces F=N and the number of edges = E N 2 permits at most a closed surface of genus g=1 as its Euler characteristic χ=V−E+F is zero. To construct a lattice with negative Euler characteristic from a square lattice requires alterations of some of its vertices or plaquettes. For example, they may include increasing or decreasing the number of edges connected to some vertices or changing the number of edges associated with some plaquettes. We refer to such alterations as defects and emphasize that these are local lattice defects as opposed to non-local defects such as lines of dislocations [10,11,12]. We can think of these defects as particles, called genons [13,14].
We first construct a lattice with g=2 before considering lattices of higher genus. We start with an octagonal piece of square lattice and identify or glue its opposing boundaries together in a way similar to creating a torus by identifying opposite boundaries of a rectangle. The construction is illustrated in figure 2. If we tessellate an octagon with a square lattice and identify the sites residing on the boundary as indicated in figure 3, the resultant lattice will be of genus g=2. We now have a defect plaquette with 12 edges centered around the corners of the original octagon, which are all identified once the boundary edges are glued together. Clearly we could tessellate an octagon with a variety of square lattices of different sizes. The particular lattice we use is characterized by three numbers {N a , N b , N c } which specify the number of vertices along the vertical, diagonal and horizontal edges respectively as shown in figure 3. The total number of vertices on such a lattice with dimensions c a c tot . We can calculate the Euler characteristic by noticing that there are exactly 2N tot edges and N tot −2 plaquettes including the defect plaquette. Hence we have c = - as desired. We note for completeness that there are other ways of gluing the edges of an octagon together in order to produce a g=2 surface but these may lead to the emergence of undesired line defects. Our approach described above avoids this issue.
Alternatively, we could consider a similar construction using the dual lattice. While the original lattice has each vertex four-valent and all plaquettes are square except the defect plaquette, the dual lattice has all plaquettes square and all vertices four-valent except one defect vertex which is twelve-valent. However, this would require changes of the Hamiltonian of the model. We therefore prefer to work with the original lattice which preserves the form of the Hamiltonian. We will, nevertheless, need to define the vortex operator for the defect plaquette and its magnetic contribution.
We now consider the construction of lattices on surfaces with genus g>2. One approach to generalize the construction developed for the g=2 surfaces above would be to start with a polygon with a greater number of sides (e.g. dodecagon for a g=3 surface) and then glue the opposite sides accordingly. Here we prefer a different and more modular approach which lends itself more naturally to a numerical implementation.
Consider the octagonal piece of lattice as described above. Once all but two of the edges have been glued together, we are left with a lattice with the topology of a torus with two punctures in it. We now use this as a building block for constructing lattices with higher genus. Consider g−1 copies of a torus with two punctures. We can always glue the punctures together in such a way that results in a closed connected surface of genus g. With regards to the lattice, we start with g−1 copies of the octagonal piece of square lattice described above and stitch them together to form a chain of octagons as depicted in figure 4. We now form a lattice on a surface of the desired topology by identifying the remaining opposing edges of each octagon as well as by gluing together the remaining edges of the first and the last octagon of the chain. The resultant lattice will have g−1 defect plaquettes, identical to the one described above, located where two octagons are joined together.
We now verify the Euler characteristic for our higher genus lattices. If each octagonal piece has dimensions N a , N b and N c then the total number of vertices on this lattice is (g−1)N tot . We can still uniquely associate every vertex to two edges so the lattice has 2(g−1)N tot edges. To write down the number of plaquettes as a function of the lattice The lattices we will be considering on genus 2 surfaces will tessellate an octagon as depicted on the left. They are characterized by three numbers N N , a b and N c . N a is the number of links crossing the vertical (green) edge of the octagon. N b is the number of sites living on a diagonal (blue or red) edge. N c is the number of links crossing the horizontal (purple) edge of the octagon. When the edges have been identified appropriately, the links colored red in the center image form a closed chain depicted in the image on the right. The corners of the octagon all meet at a common point represented by the black dot at the center of the right image. The corresponding plaquette, centered around this point, will have 12 edges and we will refer to it as the defect plaquette. dimensions {N a , N b , N c }, we can associate every vertex to the upper right hand plaquette it forms a corner of. Every square plaquette will be assigned a unique vertex while the defect plaquettes will be assigned three. So the number of plaquettes on the lattice is equal to the number of vertices minus 2 for every defect:
The model on surfaces of genus g2
We now consider the model on the lattices constructed in the last section. We first write down and discuss the Hamiltonian for the system and its symmetries in the effective spin/hardcore-boson representation of the model. We then fermionize the bosons to obtain a Hamiltonian quadratic in fermionic operators. Since the lattices we will be considering do not have any translational symmetries, we will not be able to write down the ground state in closed form as was done in [6] for the model on a torus. However, the formalism allows one to efficiently diagonalise the Hamiltonian numerically within any particular common eigen-subspace of the models symmetries.
The Hamiltonian in the effective spin/hardcore-boson representation on the lattice described above is of the same form as that on a lattice without defects (2). In both cases, every vertex is four-valent with two horizontal (x-links) and two vertical (y-links) edges attached. If we denote a site of the lattice by q, then by q+n x we denote the neighbor to the right of q that is connected to it by an x-link. Similarly, we use the notation q+n y to denote the neighbor above q that is connected to it by a y-link. The bare Hamiltonian can then be written as follows: Regarding the potential k = å V V P P , the contribution from the square plaquettes are still given by the expression (8). On the other hand, the contribution of the defect plaquettes to the potential are more complicated and are given by This expression follows from translating the three-body spin terms, linked at the third order of perturbation theory to the weak magnetic field, into the effective spin/hardcore-boson representation. In the original honeycomb picture, the defect corresponds to a plaquette with eighteen edges and the contribution to the potential by a defect plaquette is the following sum of three-body spin terms: where the sites of the plaquette are numbered as depicted in figure 5.
We can still define a vortex operator which commutes with the full Hamiltonian k = + å H H V P P 0 for every plaquette. For square plaquettes the vortex operator is defined as in equation (9). For the defect plaquettes however we define the vortex operator as follows t In addition to the vortex operators, we also define an operator, which commutes with the Hamiltonian, for every generator in a basis for the 1st 2 -homology group H 1 of the lattice. We will call these operators loop operators and to define them we will need to choose a basis for H 1 and a particular representative from each homology class in that basis. For a lattice of genus g2 the rank of H 1 is g 2 so we will need to choose g 2 homologically distinct cycles. We will choose the cycles depicted in figure 6 and their associated homology classes as the representatives and basis respectively. As depicted, for a lattice with g−1 copies of an octagonal piece of lattice we will choose three cycles on the first copy, two cycles on every other copy (reflecting the fact that every additional copy increases the genus by 1 and the rank of H 1 by 2) and one horizontal cycle that spans each octagon. The loop operators we will define for these cycles will act on the sites of the lattice that are connected to the links that constitute the cycles. How a loop operator acts on a particular site is determined by the way the associated cycle passes through it. There are six ways a cycle can pass through a site as depicted in figure 7 and we will associate a single site operator with each of them as follows: . 15 vertical part and two corners and so the loop operator for this cycle can be written as follows:
t t t t t t t t t t t
Corner 1
Horizontal
Corner 4
Vertical
One can in principle define a different set of loop operators that commute with the Hamiltonian but these will in general be equivalent to a product of the loop operators already defined times a product of vortex operators [9]. The vortex and loop operators form a set of commuting observables, allowing us to decompose the Hilbert space as follows: subspaces where it can be expressed as a combination of terms that are quadratic in fermionic operators. The restricted Hamiltonian can then be diagonalized by an appropriate Bogoliubov transformation. We now change to a basis of the Hilbert space which reflects the decomposition (17). It seems natural to consider the common eigenvectors of the vortex and loop operators along with the eigenstates of the boson number operator ( = † N b b q q q ) for each site q of the square lattice. However, the basis so defined would be overcomplete. If there are -( ) g N 1 tot sites in the lattice, the model clearly has distinct combinations of eigenvalues a common eigenvector of this set of observables might have. However, the vortex and number operators are not completely independent operators as they satisfy two conditions.
The first condition is the fact that the product of all vortex operators is equivalent to the identity operator. That is, where the product is over all the plaquettes of the lattice. Since a product of vortex operators can be thought of as counting the parity of vortices occupying the associated plaquettes, this essentially means there can only be an even number of vortices in total in the model. So the number of independent vortex operators is and hence the number of configurations of vortices in the model is - The second condition is a relation between the parity of bosons in the system and a certain product of vortex operators. For a lattice where the numbers N a and N b are both even, we can consider a set of plaquettes forming a checker board pattern as depicted in the top left image of figure 8 by the colored squares. It is easy to check that since the Pauli operators square to the identity, the product of the vortex operators associated with the colored (or uncolored) plaquettes is equivalent to the boson parity operator where q runs over all the sites of the lattice. In other words, the parity of the number of bosons must be the same as the parity of the number of vortices on colored plaquettes (or equivalently uncolored plaquettes). Since the parity of bosons is fixed to be 1 or −1 depending on the configuration of vortices, the number of independent boson number operators N q is -- tot and hence the number of configurations of bosons in the model is - tot . For lattices where N a or N b are odd numbers, there is a similar dependence of the boson parity on the configuration of vortices in the system. For such lattices, we cannot color the plaquettes with a perfect checker board pattern but we can consider sets of plaquettes as depicted in figure 8, such that the checker board pattern is misaligned along a 1-cycle of links that separate plaquettes of the same color. The exact pattern we choose for coloring in plaquettes and the associated cycle along which the checker board pattern is misaligned depends on the parity of the numbers N a , N b and g for the lattice and is described in figure 8. If we compose the corresponding vortex operators, the Pauli operators for sites away from this cycle will cancel out as they did before but along the cycle, the resultant operator will act with a string of Pauli operators and may not act with the parity operator -( ) N 1 2 q for some sites. However, we can cancel the action of these Pauli operators, and replace any missing single site parity operators we need to obtain the full boson parity operator, by composing this product of vortex operators with a product of loop operators that act on the sites connected to the links of the cycle. The desired product of loop operators that act on the sites connected to the links of the cycle are shown in figure 8. In general, the boson parity operator can be written as The next step of the solution is to use a 'Jordan-Wigner' type transformation to fermionize the bosons of the model. This should result in a Hamiltonian which is quadratic in fermionic operators which we will then be able to solve using the Bogoliubov-de Gennes (BdG) technique. To fermionize the bosons, we will define a Jordan-Wigner type string operator S q for each site q of the lattice. The composition of these string operators with the boson creation and annihilation operators will be fermionic creation and annihilation operators. Expressing the Hamiltonian and other observables in terms of these new operators will effectively transform the hardcore bosons of the model into fermions.
To define a string operator for a site q of the lattice we consider the following: if we had a particle located at the reference site (as in figure 9(a)) we can always move that particle to any site q by first moving it to the right an appropriate number of sites and then up an appropriate number of sites. Even the sites below the level of the reference site can be reached in this way by making use of the boundary conditions as shown in figure 9(b). We can associate a single site operator for every site traversed in the path just described connecting the reference site to the site q. The string operator for q, denoted S q , will be defined as the composition of these operators. To every site i crossed by the horizontal part of the path we associate the operator t --( ) N 1 i i x , to the corner of the path we associate the operator t i y , to every site i crossed by the vertical part of the path we associate the operator t i x and to the last site of the path we associate the operator t i y . Since each of these operators act on different sites, they all commute with each other and so we are free to define S q as the composition of these operators without worrying about the order of composition. If we let q x denote the number of sites that need to be traversed in the horizontal part of the path with the site at the corner and q y the number of sites that need to be traversed in the vertical part of the path with the site at the end, then we can number the sites of the path from 1 to q x +q y , beginning at the reference site and ending at q and we can write the string operator for q as follows: If we consider two string operators S q and ¢ S q such that ¹ ¢ q q there will be a single site, shared by the paths defining the string operators, where the action of S q anti-commutes with the action of ¢ S q . It follows that composing the string operator S q with the bosonic creation and annihilation operators for the site q defines fermionic creation and annihilation operators for q which we denote by † c q and c q .
, , 0, , 0. 23 Expressing the basic Hamiltonian in terms of these fermionic creation and annihilation operators yields the following sum of quadratic fermionic terms, Noting that both the ¢ X q q , and ¢ Y q q , operators, being products of string operators, act on a closed loop of sites, we can associate a 1-cycle with each of the ¢ X q q , and ¢ Y q q , operators, namely the set of links joining the sites being acted on. These operators will always be equivalent to a product of loop operators, which is determined by the homology class of this cycle, and a product of vortex operators which is determined by a certain 2-chain related to the homology class of the cycle and the representatives of the homology classes we have chosen as a basis for H 1 . Recall that each loop operator is associated to a non-trivial cycle, the collection of which represent the generators of H 1 . So whatever the homology class may be for the cycle a associated with an X or Y operator, we can always create a unique cycle b which will be homologous to a by adding some combination of the cycles associated with the loop operators. A particular X or Y operator is proportional to the product of the loop operators corresponding to the cycles used in the combination forming b.
There will also be a 2-chain, which we denote by ς, which will have a+b as a boundary. A particular X or Y operator is also proportional to a product of the vortex operators associated with the plaquettes which constitute ς. We note that while such a 2-chain ς is not unique, the operator obtained by multiplying the vortex operators associated with the plaquettes of the 2-chain ς is unique. For example, if we cut out a cylinder with boundaries a and b from a torus, the product of the vortex operators inside of the cylinder is the same as in its complement. This follows from the fact that vortex operators square to the identity and the relation (18). In general, when expressed in terms of loop and vortex operators, the X and Y operators are of the same form. We will use a notation to reflect this by letting Z q denote X q if q is a x-link and Y q if q is a y-link. Explicitly we have 1 is the homology class of the cycle associated with the link q described above. So the Hamiltonian can be written as follows tot tot matrices ξ and Δ are given by Here, d ¢ Regarding the potential, when expressed in terms of the fermionic creation and annihilation operators, each term appearing in the sum defining the contribution from a plaquette inherits a product of string operators similar to the X and Y operators. The potential also becomes quadratic in fermionic operators and can be written as To be able to calculate the ground state energy numerically for a particular vortex/homology sector, we need to understand how the matrix T represents the state fñ | and how it can tell us the parity of the of the number of occupied c-fermions modes it has. In general, T will be of the following form: tot tot matrices which, since T must be unitary, must satisfy Bloch and Messiah were able to show that a unitary matrix of the form (41) can be decomposed as follows [15,16] where the -´- tot tot matrices D and C are unitary and bothŪ andV are real matrices of the following block diagonal form: odd in half of the homology sectors and even in the other half. This leads to a splitting in the energy between fermionic ground states in half of the homology sectors from the other half resulting in the degree of degeneracy d=8. In figure 10(b), we see that the system with odd dimensions N a and N c has half of its homology sectors forming the ground state in the Abelian phase while the other half are excited states. As the system approaches the phase transition, the sectors forming the ground state begin to split with two of them becoming excited in the non-Abelian phase while four of the excited sectors drop in energy to join the remaining six non-excited sectors to form the ten-fold degenerate ground state in the non-Abelian phase. Due to finite size effects, there is a small splitting in the energy between the degenerate homology sectors that form the ground state. We expect this spitting to vanish in the thermodynamic limit. We measure this splitting by the difference in energy between the sector with the highest energy and the sector with the lowest energy. In figure 11 we plot the splitting between the the degenerate states as a function of N=N a =N b =N c for the two Abelian cases (even and odd sizes) and the non-Abelian case. As shown in the figure, we find the splitting between the sectors forming the ground state approaches zero exponentially as N grows. This calculation was done with κ=0.2 in each case and with J=0.1 for both of the Abelian cases and J=1 for the non-Abelian case. Figure 10. In (a) we show the difference between E min and the energy E of fermionic ground states and first excited states in each homology sector as a function of J x =J y =J for N a =N b =N c =4 and κ=0.2 on a genus 2 lattice. In (b) the same energy difference is plotted for N a =N b =N c =5. The number of degenerate ground states is included just above the lowest curves in both the Abelian and non-Abelian phases. Figure 11. The splitting in energy between the degenerate homology sectors, measured by the difference between the sector with the highest energy and the sector with lowest energy, vanishes exponentially as the system size N=N a =N c increases. The largest system size corresponds to over 5000 spins of the original honeycomb lattice; beyond that the numerical precision starts competing with the ground state splitting.
We used this method to calculate the degeneracy of the system on lattices with genus g=2, 3, 4, 5 and 6 in both the Abelian (for even and odd sizes) and non-Abelian phases. We have summarized the results in table 1. Kitaev showed using perturbation theory that the honeycomb model in the Abelian phase is equivalent to the toric code which can be shown to have a ground state degeneracy of 4 g . This agrees with our results for systems with even N a and N c . For systems where either N a or N c are odd we find the degeneracy is exactly half of 4 g . This can be attributed to the fact that the equivalent toric code in this case has a line defect in it like the one discussed in [18]. Our results for the non-Abelian phase agree with a formula discussed by Oshikawa et al [19] who showed that the bosonic Pfaffian state, which belongs to the same universality class, has a ground state degeneracy + -( ) 2 2 1 g g 1 given by the number of even spin structures on a surface of genus g [20].
Conclusion
In summary, we realized the Kitaev honeycomb model on surfaces with genus g2 by introducing extrinsic defects to the underlying lattice. This required a non-trivial generalization of the exact solution of the model to include extra loop symmetries associated with homologically non-trivial loops which are introduced by increasing the genus of the lattice. We also highlight the dependence of the fermion parity on both the vortex and loop symmetries of the model for various lattice dimensions. The generalized solution was then used to calculate the ground states in both the Abelian and non-Abelian phases of the model. The degree of degeneracy of these ground states in both topological phases are in accord with available theoretical predictions based on topological quantum field theory.
Our work provides a direct realization of two distinct topological quantum field theories, specifically the Abelian doubled- 2 and non-Abelian Ising theory, on closed surfaces of higher genus. As such it provides a solid basis for further investigation of the model on various manifolds, including also manifolds with boundaries which would extend previous studies of the Kitaev model [21]. Recent works on time-dependent simulation of creation and annihilation of vortex-like excitation on defects in the Kitaev model on torus [10] suggest the possibility of a dynamical process where creation and annihilation of extrinsic defects would result in dynamical change of the model genus and thus its topology. Interestingly this incarnation of topological field theory would be close to its axiomatic definition as a modular functor from a monoidal category of cobordisms to that of vector spaces [4,5]. | 8,110.2 | 2018-05-09T00:00:00.000 | [
"Physics"
] |
Models and data quality in information systems applicable in the mining industry
. The purpose of this article is to present modern approaches to data storage and processing, as well as technologies to achieve the quality of data needed for specific purposes in the mining industry. The data format looks at NoSQL and NewSQL technologies, with the focus shifting from the use of common solutions (traditional RDBMS) to specific ones aimed at integrating data into industrial information systems. The information systems used in the mining industry are characterized by their specificity and diversity, which is a prerequisite for the integration of NoSQL data models in it due to their flexibility. In modern industrial information systems, data is considered high-quality if it actually reflects the described object and serves to make effective management decisions. The article also discusses the criteria for data quality from the point of view of information technology and that of its users. Technologies are also presented, providing an optimal set of necessary functions that ensure the desired quality of data in the information systems applicable in the industry. The format and quality of data in client-server based information systems is of particular importance, especially in the dynamics of data input and processing in information systems used in the mining industry.
Introduction
Modern databases, which are the basis of information systems, operate with different data models. The aim is to represent them to describe the described real objects as accurately as possible, and the challenge is at the same time for the data form to allow their online processing in real time (Fig. 1). In general, the evolution of database management systems (DBMS) can be described in three stages: x Navigation systems -those were used in the 1960s and represented hierarchical and network models of data description; x Relational -those were created in the 1970s and are used to this day. They are based on set theory and on relational algebra. The objects are described in the form of two-dimensional tables allowing for connections (relations) between them. They use the SQL programming language; x Post-relational -this category comprises a wide variety of data description methods. The objectoriented model was introduced in the 1980s, and the NoSQL and the NewSQL models have become popular in the recent decade. Over the past 10 years, NoSQL and NewSQL models have become popular, which are targeting for a specific problem, such as short-term OLTP (Online Transaction Processing) operations.
At the same time, the information in them should be as high-up to date, accurate and sufficiently comprehensive as possible to enable maximum effective solutions.
In the mining industry, the processes are in continuous dynamics, mutually connected, and each of them can affect the operation of the whole system, depend also on the natural resources and require large investments in resources and funds. Moreover, the majority of the tasks in the modern mining industry are characterized by a pronounced uncertainty, non-linearity and multifactorial. [1] In this case, an unfortunate decision taken based on poor quality information can lead to huge losses for the particular enterprise.
In order to avoid such situations, it is especially important to obtain quality data, i.e. data meeting the requirements of the specific information system. The format and quality of the data is directly dependent on the purposes for which they will be used [2], and from the point of view of information systems the format and quality of the data is part of the whole process of data management.
Modern data models
Standard relational databases were not designed to handle the scale (Big Data), flexibility and real-time operation that are required by modern information systems. In addition, they do not take full advantage of the low cost of storage devices, nor of the high performance of the machines we have at our disposal nowadays.
NoSQL encompasses a wide variety of database technologies that have been developed in response to the increasing amount of data stored for users, objects and products, the frequency with which this data is accessed, as well as the need of high performance in their processing.
The first NoSQL software appeared in the early 21 st century: MongoDB (2009), Redis (2009), Cassandra (2008), etc. Today there is a wide variety of data models used in NoSQL systems. The most popular are shown in Fig. 2: x Key-value: here, information is stored in records of the "key-value" type and complex data structures, including XML, can be stored as "value". x flexibility -they do not work with static schemes; x scalability -they also allow for horizontal scaling; x facilitated database transfer across multiple servers. The biggest drawback to NoSQL systems is that they are not transitive.
Typically, NoSQL databases are used in distributed systems information systems, where the emphasis is on productivity in processing large volumes of data, which makes them applicable to information systems in the mining industry.
In such systems, the CAP theorem (Brewer's theorem) is observed [3]: "In a distributed system, at most two of the categories can be satisfied: x Consistency (C): all database clients see the same information, even with competitive updates; x Availability (A): all database clients can access any version of the information; x Partition tolerance (P): The database can be partitioned over multiple servers. The simultaneous provision of all three guarantees is impossible ( Figure 3). The theorem proves that only two of the three pillars can be used to create such a system, i.e. we may have a system with high consistency and expandability, a system with high data availability and expandability, or a system with high consistency and high availability, but without expandability.
Most NoSQL databases operate on the BASE (Basically Available, Soft-state, Eventual consistency) principle: choosing availability and partitioning at the expense of consistency and looking for the fastest and most reliable synchronization among individual servers.
NoSQL databases still have limited application in specific areas, but the fact that they are used by IT giants like Google, Facebook, Amazon, and LinkedIn is a proof about their potential.
Numerous comparative analyzes of the performance of RDBMS and NoSQL have shown that, in general, NoSQL systems perform better when recording, deleting, and updating Big Data sets than are common to information systems used to manage mining processes.
NewSQL databases have been talked about for the last few years. The term NewSQL was first proposed by Aslett [4]. These are actually databases that combine the advantages of SQL and NoSQL databases (Fig. 3), as NewSQL are transitive and horizontally and vertically extensible. The products described as NewSQL databases are very diverse, but three main types can be classified: x SQL engines: highly optimized storage engines for SQL (examples MySQL Cluster, Infobright, TokuDB); x New architectures: databases that were designed to operate in a distributed cluster (examples Google Spanner, Clustrix, VoltDB, MemSQL); x Transparent sharding: they provide a sharding middleware layer to automatically split databases across multiple nodes (ScaleBase). The coal of NewSQL databases is to provide a highperformance and affordable solution for processing large volumes of data without compromising data consistency and high-speed transaction capabilities, making them very efficient and applicable to some processes in the mining industry, which are almost completely automated.
They are best used in the control of enrichment processes, where the data are very high frequency -the sensors (express analyzers) continuously provide information at intervals of up to 2 minutes, which requires the supply of appropriate reagents to obtain the desired content of ore concentrate.
Although in recent years many analytical comparisons have been made between SQL and NoSQL databases [5,6], today the choice of which data model to use is determined mainly by the specific conditions and tasks.
Data quality assurance technologies
Data quality is a characteristic that shows the extent to which they are analyzed and meet the needs of the business to make informed and effective decisions. From an information technology perspective, data quality is part of the whole data management process.
The criteria determining whether we operate with quality data can be considered according to the requirements of information systems and from the point of view of their users.
The requirement for the use of high quality data in information systems is that they meet at least five main criteria [7] -completeness, accuracy, validity, consistency and timeliness (figure 5).
Unlike standard data collection (on paper), information technologies make it possible to ensure the completeness of data by using functions that allow the input and digital storage of information only, where all attributes for the object, activity etc. have been introduced.
To ensure full quality data, additional features are introduced that check not only the correctness of the data provided but also the exact implementation of the data entry format defined by the particular information system. Accuracy of data criterion suggests that incoming data in the information system are correct and fully reflect the depicted object, process, etc. To avoid the risk of inaccurate data submission, the interference of the human factor in this activity should be minimised already at the design stage of a specific information system. Unfortunately, this is almost impossible, and therefore, the implementation of this activity must be done by competent and well trained specialists.
To ensure the data accuracy, especially in cases of a high volume or a continuous stream of data, additional features are being set in the information systems which check for inaccuracies at every step and eliminate admission of such.
The criterion validity of the data determines how data values are correctly measured according to the pre-set conditions. If we have received invalid data, this means that there is a problem in the process of collecting the data.
When you get values for specific data that are beyond the limits of the usual, it does not always mean that they are invalid. In such a case the values should be rechecked. In the flexible information systems this problem is easily solved by altering the defined limits for measured values and incorporating new values.
In information systems, especially in those with longer term of use, there are data about the same object, process, action, etc., that are introduced at certain periods and have different values. In other words, there are different versions of the data for an object or process.
The consistency criterion ensures that the data in the various versions are saved in the same format and in the most important this data format is not changed during processing.
In order for an adequate and efficient decision to be made, it is important that the data we need to analyse should be timely -i.e. there is no time interruption of the incoming data stream for various reasons.
The timeliness criterion is especially important in industrial systems, which manage continuous production processes because the lack of data for a specific segment of time can lead to incorrect management decisions.
From the point of view of data users, the criteria for data quality can be considered conditionally in four major groups -availability, usability, comprehensibility and security ( figure 6). Availability of data means that in every moment, when appropriate, users need, to have access to them and they are always available.
In information systems, basic characteristics about the availability of data are accessibility, authentication, authorisation, and timeliness of equivalence.
In the client-server technology used by modern information systems by design levels of access to a specific collection of data are defined and an access level is assigned to every particular user, which determines what kind of data to be submitted. Various collections (databases) available for specific levels may exist. An example in this respect are geographical information systems [8], where there is different accuracy (data quality) depending on the type and level of access.
Depending on the specific level of access, it is verified if that user has permission (authentication) to use the information resource (i.e. to a lower or higher quality data). Authorisation is performed by the information system itself, as it gives the user rights to perform the permitted set of actions.
Since a large part of the information systems, including industrial ones, are used by many users, and different users can enter information, the equivalence of data is of particular importance. It measures the extent to which equality (equal values) of the same data is guaranteed.
The timeliness guarantees users that data are timely (as timely as possible), which is essential in making effective decisions.
The usability criterion means that data incoming in the information system from different sources can be processed and analysed.
The data characteristics that determine their usability are documentation, validity, applicability, precision, flexibility and interactivity.
The most important feature of usability of incoming data is their ability to be converted into a digital format by the information system, i.e. they can be formalised by meeting their set conservation model [9].
The validity of the data is determined by comparing the relevance to the requirements set for the specific information system.
Applicability is a characteristic that determines how much data can be processed and analysed in support of specific targets. In order to have adequate solutions taken on the basis of the data it is necessary to have precise data -i.e. they need to have values in the range specified in the information system. Thus, the level of detail of the data, which is required by different groups of users and management levels, is defined. The too high level of refinement and detail of data often leads to difficulties in the operation of information systems and it is therefore necessary to find a level of balance that satisfies both these two characteristics.
Data security assures the users that they are provided with the requested information in an accessible form and the data origin is guaranteed.
The main features ensuring data security are standardisation, reliability, comprehensiveness, integrity, objectivity, comparability and stability.
Standardisation ensures that the data submitted and processed correspond to the rules set in each information system, which in some cases are valid for different information systems that share and exchange information. This data feature is set in the design process of the relevant information system and is monitored throughout its entire life cycle.
Nowadays, the reliability of data is a key feature not only for information systems but also for society as a whole. They give confidence about the source of the data and its reputation, which determines the degree of confidence in the data. Comprehensiveness is a complementary feature that determines to what extent the data is satisfactory and covers the user's request. Data integrity is one of the most important features of data, especially in an insecure environment such as the Internet, because it ensures that changes to data are made only by authorised users. The objectivity feature of the data ensures that the data are not modified under the influence of human emotions, i.e. only the specific facts about the data are reflected.
Naturally, one of the most important features of data is the ability to be permanently stored and accessible over a long period of time to ensure its stability.
Information technology uses many different techniques that guarantee the use of only high quality data. As particularly critical in this regard we can identify technologies that provide standardization, profiling, matching, control and cleaning of real-time data (Fig. 7), which are particularly important in ensuring the operation of information systems in the mining industry.
Data standardization is a technology that operates on the basis of established rules and criteria, ensuring the desired quality. The received data goes through various transformation processes in order to comply with the rules set in the specific information system. [10] Additional functions must be included, allowing automatic correction in the presence of minimal inaccuracies or rejection of data in case of significant discrepancies.
Data standardization is especially important in ERP systems, where information comes from different sources. This technology is also essential when we have data exchange between different information systems with diverse databases, such as those used in the management of various processes in the mining industry. Data profiling is a technology used to analyses the content, quality, and structure of output data, and is used in various criteria for data quality, such as determining their accuracy and completeness.
Data sources are considered, with an initial assessment of the data to identify potential and actual deficiencies. The goal is to find out the wrong areas in the data organization that can be found in user input, interface errors, data corruption when transferring, and so on. The use of this technique significantly improves data quality.
Data matching aims to find records that relate to the same object, process, individual, and so on. It can be done in many different ways, but the process is often based on algorithms or programmed circuits, where processors perform sequential analyzes of each individual data set, comparing it to each individual part of another data set or comparing complex variables for finding strings containing specific similarities. In the paper of I. Getova [11] present innovative test and evaluation model which gives a probability assessment obtained learning of the lectured material by the learners and provides information on how much the learner perceives the new material and how well the lecturer has presented it in a way can be readily understand it of students. The analysis in this article is performed on a set of data collected by two universities in Bulgaria using the IBM statistical analysis program -SPSS.
Data control is a set of technologies that monitor changes in data quality over time and report deviations from predefined quality indicators. The control of the data is realized through various software tools (drop-down menu, mandatory field, etc.), which monitor and guarantee the completeness, accuracy, validity, timeliness and other quality characteristics of the submitted data.
The timeliness of data in information technology is most easily ensured through cloud structures where all data about a particular object, process, individual, automatically transferred to the cloud once a process is complete and immediately available to all users authorized to work with them.
The data cleaning process monitors for incorrect, incomplete or inaccurate data and ensures that all obsolete or non-compliant data quality criteria are removed.
In modern information systems, software tools for quality control and data cleaning are built into the respective input modules, which allows them to work in real time. In this way, incomplete, inaccurate and outdated data are not allowed to enter, which maximally supports the making of the right management decisions. This is the process that ensures that the data is correct, consistent and applicable. Data clearing is important because it improves data quality by removing any obsolete or incorrect data and leaving the highest quality information.
For the data to be used by different management levels (different user groups) and to be available on different devices (PC, Tablet, Smartphone), it is necessary to possess flexibility, which is particularly important in ERP systems in the mining industry. This means that they are subject to processes for different organizational changes or reengineering with minimal modification of the existing objects and relations in them. The use of information systems through the Internet or in a network mode requires the data to be interactive -that is, to have two-way communication between the data and users.
Conclusion
Although relational databases are still widely used in the mining industry, with the increasing volume of processed data distributed in the Web environment and the introduction of the Internet of Things, they are finding it increasingly difficult to handle large real-time data sets. NewSQL databases still offer partial solutions, but NoSQL has already established itself in certain areas as a better solution than classic RDBMS.
The information systems used in the mining industry are characterized by their specificity and diversity both for the type of mineral deposit (each deposit is unique) and compliance with the requirements of the specific company [9,12,13], which is a prerequisite for the integration of NoSQL data models in it due to their flexibility.
More and more mining companies plan, manage and control their activities, using specialized information systems adapted to their conditions and requirements. Since many large mining companies, incl. and in Bulgaria they are already building their own cloud structures, using information from different types and models of databases, the technologies guaranteeing the processing of high quality data are of special importance.
However, due to the diversity of the software tools used, the implementation of all criteria for high quality data proves to be a difficult problem to implement, as an optimal balance between all criteria is sought.
For this reason, each mining company, depending on its requirements and available software tools, determines which quality criteria are most important for its work at a particular time, and this process is dynamic with the introduction of new information technologies. | 4,842.6 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Fault Diagnosis of Oil-immersed Power Transformer Based on Difference-mutation Brain Storm Optimized Catboost Model
To address the problem of low accuracy of power transformer fault diagnosis, this study proposed a transformer fault diagnosis method based on DBSO-CatBoost model. Based on data feature extraction, this method adopted DBSO (Difference-mutation Brain Storm Optimization) algorithm to optimize CatBoost model and diagnose faults. First, for data preprocessing, the ratio method was introduced to add features to the original data, the SHAP (Shapley Additive Explanations) method was applied for feature extraction, and the KPCA (Kernel Principal Component Analysis) algorithm was employed to reduce the dimension of data. Subsequently, the preprocessed data were inputted into the CatBoost model for training, and the DBSO algorithm was adopted to optimize the parameters of the CatBoost model to yield the optimal model. Lastly, the DBSO-CatBoost model was exploited to diagnose the transformer fault and output the fault type. As indicated from the example results, the accuracy of the transformer fault diagnosis based on DBSO-Catboost model could be 93.71%, 3.958% higher than that of CatBoost model and significantly exceeding that of some common models. Furthermore, compared with other preprocessing methods, the accuracy of fault diagnosis by employing the data preprocessing method proposed in this study was significantly improved.
I. INTRODUCTION
Transformer is vital equipment of power system, capable of achieving voltage transformation, power distribution and power transmission. Its safe and reliable operation is correlated with the safety and power supply quality of the whole power grid. Accordingly, accurate diagnosis of transformer faults is critical to maintaining the safe operation of power grid and ensuring the quality of power supply [1]- [5].
The causes and types of power transformer faults are difficult to directly detect. Currently, Dissolved Gas Analysis (DGA) has been the most common fault diagnosis method. When the power transformer is overheated and discharged, its insulating oil will emit gases that dissolves in the oil. By analyzing the dissolved gas, the DGA method can determine the operating condition of the transformer. Conventional DGA methods consist of three-ratio method, Rogers ratio method and non-coding ratio method [6]- [9]. The mentioned methods exploit the relative content of dissolved gas to determine the fault type, and the calculations are simple. However, the classification effect of data close to the threshold is relatively poor, and there are common 'missing code' or 'super code' phenomena [10]- [12].
Over the past few years, as artificial intelligence is leaping forward, several intelligent algorithms combined with DGA method are applied to the fault diagnosis of power transformers. On the whole, the mentioned intelligent algorithms fall to non-ensemble learning and ensemble learning. Non-ensemble learning algorithms consist of BP neural network, support vector machine, extreme learning machine and others, each of which exhibits certain advantages, whereas some problems remain unsolved [13]- [17]. Zhang et al. combined the optimized BP neural network with DGA method to increase the accuracy of transformer fault detection to a certain extent, while defects remain (e.g., slow training speed and difficult parameter determination) [18]. Huang Tongxiang et al. used a support vector machine for transformer fault diagnosis. Such a machine exhibited strong learning generalization ability, whereas the accuracy is not high when there are many fault types and information is missing [19]. Du Wenxia et al. used an extreme learning machine for transformer fault diagnosis, which exhibited the advantages of fast learning speed and high generalization performance. In the diagnosis process, however, hidden layer neurons are prone to redundancy and classification accuracy decline [20].
The ensemble learning algorithm integrates multiple learners and exhibits higher learning performance. Gradient Boosting Decision Tree (GBDT) is a branch of the ensemble learning algorithm, which reduces the total error by decreasing the deviation and raises lower requirements for parameter adjustment and better robustness. GBDT is extensively adopted in transportation, medical, financial and other fields, whereas it has been rarely applied in power system fault diagnosis. Liao Weihan et al. built an oil-immersed transformer fault diagnosis model based on GBDT, Li Hejian et al. investigated an oil-immersed transformer fault diagnosis method by complying with extreme gradient lifting. As demonstrated from the comparative experiments of two literatures, the accuracy of the transformer fault diagnosis based on GBDT could be higher than that of non-ensemble learning algorithm [21], [22].
CatBoost is a machine learning library based on GBDT framework, which was proposed by Yandex in 2017. CatBoost, as compared with XGBoost, LightGBM and other GBDT algorithms, has been improved in numerous manners. It addresses the problem of gradient deviation in the iteration by complying with orderly principle, orderly enhancement algorithm and greedy strategy. In addition, it is capable of reducing the possibility of over-fitting of the model, increasing the execution speed of the model, improving the robustness of the model, and further increasing the prediction accuracy. On the whole, the performance of CatBoost is determined by the appropriate hyper-parameter set [23]- [26]. At present, the hyper-parameter optimization of the ensemble learning model largely adopts the grid search method, so the parameter set should be traversed. As impacted by the considerable parameters, the efficiency is low, and even the dimension explosion is triggered. Thus, the optimization algorithm should be applied for super-parametric optimization [27]- [29].
Brain Storm Optimization Algorithm (BSO) simulates the process of human creative thinking to tackle down problems, and it exhibits a strong global and local search ability [30]- [33].
Brainstorm optimization algorithm and optimized algorithm have exhibited prominent performance in numerous fields (e.g., medical image registration, image segmentation, engine parameter prediction, data feature selection and multiobjective optimization [34]- [38]). Many scholars have optimized the brainstorming optimization algorithm [38]- [42] to form various variants of brainstorming optimization algorithm, as an attempt to improve the performance of the algorithm [39]- [42]. ZHU H Y et al. proposed using kmedians algorithm for clustering, as an attempt to avoid the weaknesses attributed to outliers in k-means clustering, while increasing the algorithm speed [43]. Pourpanah F et al. extended BSO to an adaptive algorithm based on multiple groups, thereby improving the mutation effect of BSO, whereas the effect on multi-parameter optimization was insignificant [44]. In this study, the difference-mutation Brain Storm Optimization Algorithm (DBSO) replaced the Gaussian mutation of the BSO algorithm by complying with the BSO algorithm, which could improve the convergence rate and especially apply to the hyper-parametric optimization of the ensemble learning model [45].
As chromatographic technology has been advancing over the past few years, the detection of gas composition and concentration turns out to be rapid and accurate [46], [47]. Accordingly, in this study, the chromatographic technology acted as the vital technology of the transformer fault diagnosis. The chromatographic technology was employed to detect the transformer oil of the respective fault type, and the relevant data information was acquired. A series of preprocessing was performed on the data, and the data characteristics were extracted and normalized. A variety of fault identification models were built and classified for the processed data [6]- [12].
This study proposed a transformer fault diagnosis method based on DBSO-CatBoost. First, the dissolved gas data in transformer insulation oil were preprocessed by feature extraction, dimension reduction and normalization. Subsequently, the CatBoost model optimized by DBSO algorithm was built. Next, the processed data were trained and tested by using DBSO-CatBoost model. Lastly, the running state of the transformer was determined, and the power transformer faults were accurately diagnosed. This study builds various classification and recognition models, compares multiple models, and lastly develops a more suitable classification model for the transformer fault diagnosis. In the end, the whole study is summarized.
A. CATBOOST MODEL
CatBoost is a machine learning library supporting categorical variables, which complies with the GBDT algorithm framework. It is capable of effectively solving various data migration problems in the original GBDT, while exhibiting the advantages of fewer parameters, high accuracy and good robustness [23]- [24].
1) GBDT ALGORITHM
Ensemble learning builds multiple machine learners, trains them to form multiple weak learners, and combines multiple weak learners via some combination strategies to form a strong learner. Fig. 1 The algorithm is a framework algorithm of ensemble learning, with a basic idea to exploit the basic classification weak learner to obtain a strong learner by linear weighting and iterative training.
GBDT algorithm acts as an ensemble learning algorithm based on Boosting algorithm, which combines gradient lifting algorithm and decision tree. The model is an additive model, the learning algorithm is forward step-by-step algorithm, and the basis function is CART tree.
The concrete steps of GBDT algorithm are elucidated below: Step 1. Initializing the weak learner: Where L( , c) denotes the loss function; represents the first prediction target; c expresses the parameter with the least square loss function.
Step 2. Calculate the negative gradient of the current loss function as sample residuals: Step3. With ( , ) i im x r as the training set of the next tree, fitting a CART regression tree, get the leaf node set jm R , Leaf nodes 1, 2,..., j J = , J represent the number of leaf nodes in the regression tree.
Step4. Calculate the minimum loss function for leaf node j : i jm Where γ denotes the parameter of the respective leaf node.
Step 5. Update the strong learner: Where jm I represents the jth regression tree.
Step5: Combine weak learner to form strong learner: 2) CATBOOST ALGORITHM The prediction model in GBDT algorithm is determined by the target variables of training samples, and there is an over-fitting problem attributed to biased point-state gradient estimation. Catboost algorithm is an improvement based on GBDT framework, which can effectively address the mentioned problems [48]. Compared with other GBDT algorithms (e.g., XGBoost and LightGBM), Catboost has been optimized in numerous aspects. First, CatBoost adopts the 'ordered principle' to avoid the conditional displacement issue inherent in the iteration of GBDT algorithm, while making it possible to exploit the whole data set for training and learning. Second, CatBoost transforms the conventional gradient enhancement algorithm into Ordered Boosting algorithm, thereby solving the inevitable problem of gradient offset in the iteration, improving the generalization ability, reducing the possibility of overfitting and enhancing the robustness of the model. Lastly, CatBoost builds the combination of classification features through greedy strategy, and takes the mentioned combinations as additional features, which makes it easier for the model to capture high-order dependencies and improve the prediction accuracy more significantly. Furthermore, CatBoost selects the forgetting decision tree as the basic prediction period, thereby reducing the possibility of overfitting and increasing the execution speed of the model [25]- [29].
Set the dataset to: Where i = 1,2 … n, n is the number of sample groups. The respective group of samples x i m is the first feature vector of group i samples. Y i denotes the label value. The main methods of CatBoost algorithm include: Multiple rankings are randomly generated for learning, the same class samples are found under the respective feature, and the classification feature conversion value is calculated: where φ denotes the indicator function, which is 1 when is satisfied; otherwise, it is 0.p is a priori value.α is a priori weight.
The respective group of samples X i in the training set has a model obtained by training the other training sets without X i . The combination of classification features is built in accordance with greedy strategy, and the tree structure is selected. The Ordered Boosting algorithm is adopted to calculate the gradient of X i , and the gradient is employed to train the weak learner. Besides, the final model is developed by weighting.
1) BSO ALGOAITHM
Brain Storm Optimization Algorithm (BSO) is an intelligent algorithm proposed by Professor Shi Yuhui in 2011, largely simulating the group behavior in human creative problem solving. It exploits the clustering idea to search the local optimum, while obtaining the global optimum by comparing the local optimum [49]. The mutation idea complicates the algorithm and avoids the algorithm falling into local optimum, which applies to solving the multi-peak high-dimensional function problem.
The BSO algorithm mainly comprises the steps below: ① Initialize the population. ② Individual evaluation and clustering. ③ Selecting cluster centers. ④ New individuals are generated through variation and then updated. ⑤ If the maximum number of iterations is reached, the optimal individual is outputted; otherwise, it is transferred to the second step.
The main part of BSO algorithm is clustering and mutation [50].
BSO employs K-means clustering algorithm to cluster individuals into k categories in accordance with the distance between individuals, while taking the individuals with the optimal fitness function value as the clustering center. To prevent falling into local optimum, the mutation individuals generated by probability replace one of the clustering centers.
BSO variation covers four major ways: (1) adding random disturbance to a random class center, i.e., the optimal individual of this class, to generate new individuals; (2) randomly selecting an individual in a random class to add random perturbations for generating novel individuals.
(3) randomly fusing two class centers and adding random perturbation to generate novel individuals; (4) randomly fusing two random individuals in the two classes, while adding random disturbance to generate new individuals.
2) DBSO ALGOAITHM (DIFFERENCE-MUTATION BRAIN STORM OPTIMIZATION)
For several complex optimization problems, BSO algorithm exhibits slow convergence speed or premature problem. To improve the optimization performance, this study adopted DBSO algorithm to optimize the parameters of CatBoost model.
The DBSO algorithm exhibits the identical overall structure to the classical BSO algorithm, whereas the difference mutation is applied, other than the Gaussian mutation in the fourth step.
The classical BSO algorithm applies Gaussian mutation, and the novel individual generation equation is expressed as: where nd where T and t respectively represent the maximum number of iterations and the current number of iterations. k could adjust the slope of the lg () sig function, and (0,1) R is a random value from 0 to 1.
In this variation, the requirements can be met at the early stage, whereas the coefficient of variation of Gaussian variation tends to be fixed at the subsequent stage, so it cannot well capture the search characteristics [45].Thus, DBSO algorithm adopts differential mutation.
In human brain storms, everyone' s ideas at the early stage will be significantly different. Differences in existing ideas should be considered when creating novel ideas. Accordingly, DBSO algorithm determines the mutation step by differential mutation. The specific operation is defined as follows: x express two different individuals selected in the contemporary global. According to Eq. (10), compared with Gaussian variation, the calculation amount of the above differential variation is significantly reduced. Moreover, since the variation could be adaptively adjusted by complying with the dispersion degree of individuals in the group, it could more effectively share information and improve the search efficiency. Thus, compared with the BSO algorithm, DBSO algorithm could better balance local search and global search, and improve the algorithm performance.
III. TRANSFORMER FAULT DIAGNOSIS MODEL BASED ON DBSO-CATBOOST
This study adopted CatBoost model to diagnose transformer faults. As impacted by some parameters of Catboost model under the default value, there would be overfitting or underfitting. If manually adjusted, it would be timeconsuming to find the optimal value. Accordingly, DBSO optimization algorithm is adopted to optimize the parameters of Catboost model to improve the performance of diagnosis model. For transformer fault diagnosis, this study built a DBSO-CatBoost model (Fig. 2). The transformer fault diagnosis model based on DBSO-CatBoost primarily comprises data preprocessing, DBSO optimization and fault diagnosis. Data preprocessing mainly covers feature extraction, dimension reduction and normalization of the collected DGA sample data, as well as sequence division. The DBSO optimization part exploits the DBSO model to optimize several parameters of the CatBoost model to obtain the optimal parameters. The model training and testing part is training and testing CatBoost model, while outputting transformer fault types and assessing the model.
A. DATA ACQUISITION
The data in this study was provided by a power grid in the northwest of the State Grid Corporation of China, and 2 H , 4 CH , 2 6 C H , 2 4 C H , 2 2 C H were selected as the attributes of the transformer fault diagnosis, including 381 groups of fault data. The three-dimensional view of the data was shown in Fig. 3.
Fig. 3. 3D view of DGA data
According to Fig. 3, any single feature with large difference in DGA data cannot accurately determine a fault type of transformer, and there were some coupling relationships between the feature attributes of the data, so it was necessary to extract the feature of the data [51], [52].
1) FEATURE EXTRACTION
According to GB-T 7252 -2016 Guidelines for Analysis and Judgment of Dissolved Gases in Transformer Oil, the gas production rate of transformer insulating oil is correlated with the fault type of transformer, i.e., the fault type of transformer is correlated with the ratio of the respective gas concentration. Thus, the ratio between the characteristics of transformer fault types and input attributes was related. The common three-ratio method and non-coding method could roughly determine a fault type of transformer independently [6]- [9], so the characteristic variables generated by the interactive ratio method of input attributes exerted the decoupling effect on transformer fault diagnosis data.
The common three-ratio method and non-coding method could roughly determine a certain fault type of transformer separately, whereas the characteristic dimension generated by them cannot completely decouple the data. To achieve better decoupling effect, this study selected to traverse the data attribute ratio. The selected data feature variables were mainly composed of component concentration and its ergodic ratio. The form of ergodic ratio is determined by the formula below. DGA data had five-dimensional attributes, so the interactive ratio of data attributes is expressed below: ). Using enumeration algorithm, the new 145dimensional feature variables were obtained by traversing all the permutations and combinations of four groups, and the original 5-dimensional feature variables were added to 150dimensional data feature variables.
Since some data in the collected DGA data were zero, the feature attributes added by the ratio method achieved the case of zero division, so abnormal data would be generated.
On the whole, the processing methods of abnormal data comprised Laida criterion filling and fixed value filling. The DGA data was excessively scattered, and the data level difference was significant. Filling with the Layida criterion would eliminate most of the data, so this method was not suitable for DGA data. Here, the fixed-value filling method was used to process the abnormal data.
Each of the 150-dimensional data feature variables had different contributions to the sample, and the addition of some variables sometimes increased the complexity of the model, while affecting the accuracy of the model. Accordingly, the Shapley Additive Interpretation (SHAP) method was used for feature extraction. The SHAP value method builds an additive explanatory model. The core idea was to calculate the marginal contribution of features to the output of the model, and then explain the black box model from the global and local levels. All features were regarded as ' contributors '. For the respective prediction sample, the model produces a predictive value, and the SHAP value was the value assigned to the respective feature in the sample [53].
The SHAP values of the respective feature were calculated for 150-dimensional feature variables, and the feature density scatter plot was made. The respective row in the beeswarm graph represents a feature. Considerable samples were gathered in a wide area, and the abscissa was the SHAP value. A point represents a sample, and the color of the point represents the relative value of the point. The redder the color, the greater the blue would be, and the smaller the color would be. The ordinates in Fig. 4 were sorted by descending order of the average absolute value of SHAP value, and the first 20 characteristics of the intermediate temperature overheating category were taken as beeswarm diagram (Fig. 4). C H , 2 4 C H , 2 2 C H , respectively. Fig. 4 shows that the average absolute value of SHAP value of 2 2 C H was the largest, 2 2 C H has the greatest impact on the classification of samples. In addition, 2 H , 4 CH , 2 6 C H , 2 4 C H was also very important for sample classification [54].
The beeswarm graph only visualizes the SHAP values of all samples in one category, which does not represent the interpretability of the overall model. For the multiclassification situation in this study, the mean of the average absolute value of SHAP in the respective classification was taken to obtain the overall average absolute value of SHAP, and the sample characteristics were used to influence the histogram [55].
In the histogram, the respective column represents a feature. C H was the largest, which has the greatest impact on data classification. According to the curve in Fig. 5, the average absolute value of the top 60 cumulative SHAP in the figure took up nearly 90 %, so the mentioned 60 features were taken as the attributes of the data.
Principal component analysis (PCA) maps the original variables to a new variable space. In the new variable space, several variables could be used to replace the original variables, and the data content of the original variables could be retained as much as possible. The new variables were orthogonal to each other to eliminate the collinearity of the original variables.
Kernel principal component analysis (KPCA) achieved the nonlinear mapping of data by mapping the original data to a higher dimensional space, and then employed principal component analysis to reduce the linear dimension of data from high dimensions [56]- [58].
PCA, PLS and KPCA were adopted to reduce the dimension of the data, and the results were shown in Fig. 6 shows that the cumulative contribution increases with the increase in the dimension, and no longer increases after reaching 100 %. The cumulative contribution of KPCA was obviously higher than other dimension reduction algorithms. When the dimension was 7, the cumulative contribution rate of KPCA was 99.9 %, while the cumulative contribution rates of PCA and PLS did not reach 90 %. Subsequently, with the increase in the dimension, the cumulative contribution rate of KPCA increased slightly, and the training time of model increased with the increase in the dimension.
According to Fig. 6, KPCA was significantly better than the other algorithms, so this study uses KPCA algorithm to reduce the data dimension to 7 dimensions.
3) DATA NORMALIZATION
The difference of DGA data was large, affecting the processing speed of the model, so the data normalization processing [61]. In this study, the interval value method was used to normalize the data, so that the data was scaled to a specific interval in proportion to avoid the interaction between values. Here, the extreme value method was selected for linear function transformation: ( 1, 2,..., ) i n = denotes the normalized data, and the mapping interval is [ -1,1 ]. i X represents the original data. max i X denotes the maximum value in the data sample. min i X expresses the minimum value in the data sample. The normalized data after dimension reduction could be inputted to train and test the model.
C. FAULT STATE CODING AND SEQUENCE DIVISION
The output result of the diagnosis model was the fault type of the transformer. According to GB-T 7252-2016 transformer oil dissolved gas analysis and judgment guidelines, this study takes low temperature overheating, medium temperature overheating, high temperature overheating, partial discharge, low energy discharge, high energy discharge and normal operation as the output characteristics of the transformer fault diagnosis. In this study, a training set, a validation set and a test set were set at a ratio of 3:1:1. The number of fault state codes and their corresponding sequences was shown in Table I.
D. COMPARISON OF MULTI-MODEL DIAGNOSIS RESULTS
For the preprocessed data, six models, including extreme learning machine (ELM), support vector machine (SVM), GRNN, random forest (RF), XGboost and Catboost, were used for fault diagnosis to test the performance of various models for transformer fault diagnosis. The diagnosis results were shown in Fig. 7. According to Fig. 7, the overall accuracy of Catboost was the highest in the six models, and the SVM algorithm was the highest in the single learner. The accuracy of ensemble learning algorithm was higher than that of single learner. The specific accuracy of the respective model for each type was shown in Table II. According to Table II, the overall accuracy of the six models from low to high was GRNN, ELM, SVM, random forest, XGboost, Catboost. The overall accuracy of Catboost algorithm was the best when the empirical parameters were used, but the training time of ensemble learning algorithm was long. If the grid search traversal parameter adjustment method was used, the time required was too long, and the parameter adjustment range was relatively limited. However, single learner classification was not good. Compared with the single learner model, the ensemble learning model exhibited higher fault diagnosis accuracy for oil-immersed transformers.
E. COMPARISON OF PARAMETER OPTIMIZATION ALGORITHMS OF CATBOOST MODEL
The performance of Catboost model was better than other models. The training set of Catboost classification model was analyzed, and the data processed by ratio method combined with KPCA were used as input features. The diagnosis results of Catboost model training set and test set are presented in Fig. 8. Test Set Sample Number Test Set Sample Number In Fig.8, Catboost model uses default parameters. According to Fig.8, the Catboost model classification results were over-fitting, so the parameters of the Catboost model should be optimized.
If Catboost model adopted manual adjustment of parameters, it would not only take a long time to adjust parameters, but also find the global optimum of parameters. If the grid search method was used for parameter adjustment, the time required was too long and the range of parameter adjustment was limited. Accordingly, the optimization algorithm was used to adjust the parameters of Catboost model.
Catboost was trained by the gradient lifting method. In the respective iteration, the basis for producing a new learner was that the regularization objective function was the smallest, and the regularization parameter L2_leaf_reg was too large or too small, which would cause over-fitting or under-fitting of the model. The learning rate parameter learning _ rate was too small, and the gradient descent was too slow. Too large, it may cross the optimal value and produce oscillation. The iteration number parameter iteration was too small would cause underfitting, resulting in insufficient model solving ability. Too big would cause overfitting, resulting in a decline in generalization ability of the model. In addition, the random strength parameter random _ strength of the model was used to score the split tree, and improper selection would affect the learning ability and classification ability of the model [29]. Thus, this study selects the optimization algorithm to optimize the parameters of the above four Catboost models to improve the performance of the diagnosis model.
The common parameter optimization algorithms were particle swarm optimization (PSO), sparrow search algorithm (SSA) and so on. In this study, DBSO, BSO, PSO and SSA were used to optimize the four hyperparameters of Catboost model, and the results were compared [62]- [64].
The fitness function curve was made with the error rate of the classification results of the validation set as the fitness value. The fitness curve of the respective optimization algorithm was shown in Fig. 9. According to Fig. 9, the DBSO algorithm first reached the optimal result, and the number of iterations to achieve the optimal fitness was 11, and the fitness value at the optimal time was the same as that of SSA and BSO algorithms, which was 2.132 %. The final fitness value of PSO algorithm was the largest, and the optimization effect was the worst.
The Catboost model optimized by four algorithms was used for fault diagnosis, and the results were shown in Test Data According to Fig.10, the test set accuracy of DBSO-Catboost model, BSO-Catboost model and SSA-Catboost model was the same, which was higher than that of PSO-Catboost.
In summary, although the accuracy of DBSO-Catboost model was the same as that of the other two models, it could find the optimal point faster and the optimization effect was the best.
F. CASE DATA ANALYSIS
Using 381 sets of data collected to build the model, some sample data are listed in Table III . The above data is adopted to construct features by ratio method, then feature screening, KPCA dimensionality reduction, normalization, and finally DBSO-Catboost algorithm is applied for prediction. The results are listed in Table IV. According to Table III and Table IV, the proposed model achieves better accuracy than the traditional three-ratio method. According to the samples presented in Table IV, Catboost model and DBSO-Catboost model are used to analyze the confidence of the samples [65] [66].
The confidence of Catboost model is listed in Table V. The confidence of DBSO-Catboost model is shown in Table VI. According to Table V and Table VI, the confidence of the DBSO-Catboost model in the correct classification of samples is higher than that of the Catboost model, so the model classification method proposed here is effective.
1) DIAGNOSTIC RESULTS UNDER DIFFERENT PRETREATMENT METHODS
In this study, the ratio method was used to process the data and then the dimension reduction algorithm was used to reduce the dimension of the data. The original five-dimensional data, the data formed by the dimension reduction based on ratio method combined with KPCA, the data formed by the dimension reduction based on ratio method combined with PCA, and the data formed by the dimension reduction based on ratio method combined with PLS were used to form four different data sets with four different data processing methods. The DBSO-Catboost model was used to classify the four data. The classification results of the test set were shown in Fig. 11. According to Fig. 11, when the data were reduced to seven dimensions, the data classification effect of ratio method combined with PLS was the worst, and the classification effect of ratio method combined with KPCA was the best. In the case of DBSO-Catboost model, the accuracy of data after the dimension reduction based on ratio method combined with KPCA was 3.950 %, 10.526 % and 5.263 % higher than that of the ratio method combined with PCA, the ratio method combined with PLS, and the original five-dimensional data, respectively. Accordingly, the classification effect of the data processed by the ratio method and the KPCA dimension reduction algorithm was better than that of the original data.
In addition, when the classification algorithm is implemented, the precision, recall and F1 score of the model are the three main indicators to judge the classification effect of the model [67].
The recall rate is determined by: F1-Score, also known as the balanced F-fraction method, is calculated by : Where TP is true positive; FP is false positive; FN is false negative.
Taking the normal operation of the category as an example, the true positive represents: it is predicted as the correct number in the normal operation. False positive denotes the number of errors predicted in normal operation. False negative represents: the true value is the number of normal operation and prediction errors.
Macro-F1, i.e., the macro average method, is obtained by substituting the precision rate and recall rate of each transformer state into formula ( 14 ), and then the values of seven F1 -Scores are averaged.
Calculate the precision, recall, and F1-Score values of KPCA-DBSO-Catboost in Fig.11. The KPCA-DBSO-Catboost detailed prediction results are shown in Table VII. According to Table VII and formula 12-14, the precision, recall and F1-Score of KPCA-DBSO-Catboost method can be calculated. The details are shown in Table VIII. TABLE VIII DETAILED INFORMATION TABLE OF Therefore, combined with formula 12-14, Table 7 and Table 8, the F1-Score is 93.42 %, and the Macro-F1 value is 92.63 % by adding the F1-Score values of each class and dividing them into 7. The Macro-F1 value of Orginal-DBSO-Catboost model in Fig. 11 can be calculated by the same method, and it is found that it is less than 90 %. This shows that the KPCA-DBSO-Catboost model of transformer fault diagnosis classification method is effective.
2) COMPARISON OF DIAGNOSTIC RESULTS OF DIFFERENT MODELS
DBSO optimization algorithm is employed to classify ELM, SVM, GRNN, Random Forest, XGboost and Catboost. After the data is processed by ratio method and KPCA, the optimal classification model is built.
The optimization algorithm is adopted to optimize the initial weights and thresholds of ELM model. The penalty factor C and kernel function parameter g of SVM model are optimized by optimization algorithm. The smoothing factor of GRNN model is optimized by using the optimization algorithm. Optimization algorithm is used to optimize decision tree tree and split feature number of Random Forest [68]. The regression tree k, learning rate η, maximum regression tree depth ( max_depth ), regularization coefficient λ, min_chile_weight and minimum splitting gradient descent δ of XGboost model are optimized by optimization algorithm. The optimization algorithm is adopted to optimize the regularization coefficient L2_leaf_reg parameter of Catboost model, the random strength random_strength parameter of splitting tree score, the iteration number iteration parameter and the learning rate learning_rate parameter. The population size is set to 20, and the number of iterations is set to 100. The classification diagnosis results of the respective model are presented in Fig. 12.The detailed information of each model classification is listed in Table IX. TABLE IX DETAILED INFORMATION TABLE OF low temperature overheating 6/9 6/9 7/9 7/9 7/9 7/9 middle temperature overheating 7/10 9/10 8/10 9/10 9/10 9/10 high temperature overheating 8/10 9/10 9/10 9/10 9/10 9/10 partial discharge 7/9 9/9 8/9 9/9 9/9 9/9 low energy discharge 4/9 6/9 7/9 7/9 7/9 9/9 high energy discharge 15 Analysis of Fig. 12 and Table IX indicates that the DBSO-Catboost model has the optimal classification effect. After optimization by the optimization algorithm, the diagnostic effect of the classification model is improved. After the optimization algorithm, the classification effect of the respective model is significantly improved.
In order to verify the performance of the model, the PSO-RF model established by using the feature of non-coding ratio is compared with the model proposed in this paper [22]. The experimental results are shown in Fig.13. | 8,002.4 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
The Influence of social network models on e-commerce: a comparison between Wechat and Bilibili
. E-commerce are popular these days. While different social media may adopt different social network models, it actually made rules for how people are interacted with one-another on these platforms, and influences how people receive different messages about products. And different ways marketers engage with customers on social platforms influence the e-commerce situation or value of trade on platforms. This essay aims to evaluate the influence of social network models on e-commerce by making a comparison between WeChat and Bilibili, which are two famous social media in China. This essay will first explain some social network frames, and then will examine the operating mechanism of WeChat and Bilibili and how they are related to value of trade on these platforms. It is shown clearly that different ecommerce social networks lead to different results in ecommerce income. Wechat focus on 1 to 1 social mode and building ecommerce on the relationship between merchants and customers while Bilibili focus on building communities which eventually leads to community economy. This research result may help platform builders and merchants to choose their own strategies in ecommerce.
Research background and motivation
According to what Dr Edoardo Gallo mentioned in class, network means a set of entities connected by links, and social learning means you will learn from people you are connected to [1].Also, the way people are connected to each other would affect how some games spread.What can be cited to this research is what professor said about friendship and obesity.Direction and mutuality of friendship matters in obesity."If two people both say there are friends then obesity risk increases by 171%; If A says B is a friend but not the other way around, then risk of obesity for A increases by 57% but no change in B's risk" [2].This is highly connected to the comparison between different social apps and whether it's mutual for people to follow each other and other ways on social media do influence e-commerce.
Social media has already become an important part of business-to-business (B2B) e-commerce.It provides a platform for B2B buyers and sellers to communicate, collaborate, and exchange information [3].Another article also mentioned that "Social media allows people to freely interact with others and offers multiple ways for marketers to reach and engage with consumers" [4].
" Multiple ways", as is mentioned above, do influence the e-commerce situation or value of trade on platforms.For instance, to compare two different popular social apps in China, WeChat and Bilibili, both are famous platforms for marketers to sell their products.This article will discuss social network marketing on WeChat and Bilibili respectively in the following chapter.
Literature review
For ecommerce and social network mode, there are already many articles focusing on this area.Some articles find social ecommerce spreading from China to USA declared that Shopping on social media platforms is a central feature of e-commerce in China.Now, this new way of buying is also growing rapidly in the US [4].This is true.From the website Insider Intelligence, US ecommerce buyers remained growing from 45.8 to 101.1, from year 2017 to year 2023 as is shown in Figure 1 [5].As is pointed out by some study in this area from China, the underlying social network structure of social ecommerce satisfies the small-world, scale-free characteristics of complex networks, while the degree of network grouping and network modularity in the social network structure has a strong positive correlation with user adoption behaviour [6].Some research also used 4C theory to analyse the B2B ecommerce market which refers to Context, Community, Content and Communication [7].
Research Contents and Framework
The purpose of this paper is to assess the degree of impact enforced by social networking models on ecommerce by comparing WeChat and Bilibili, two prominent Chinese social media outlets.Using a comparative case study analysis, the paper will first explain what is meant by the social network framework and how it works, and then will examine the mechanisms by which WeChat and Bilibili operate, and how they relate to the value of trade on these platforms.
Social network marketing on WeChat
WeChat is the largest social network in China and is becoming more popular globally too.Businesses can use WeChat marketing to promote and advertise products or services to a target userbase.They can do this by running advertising campaigns on the app, promoting products and services on WeChat moments, and running miniprograms to drive conversions [8].This article will introduce WeChat moments and mini-programs.
WeChat moments is a social media platform within WeChat that allows its users to share photos, messages and videos with their friends and keep updating with them.It is similar to Facebook's news feed.Users can post updates and photos that are visible to their friends.Therefore, moments is a great way for businesses to promote their products or services by sharing updates and photos with their followers.Also, since people on Wechat have a mutual relationship instead of single direction following, the marketers can view their customers' moments as well.If people accept other people as their "friend", and if the consumers do not choose to block the marketers, the marketers may have a chance to glance at what the consumer has been interested about, and it's easy for them to send messages to consumers or post some relevant advertisements on products about customer's interests since almost all Chinese people need to check their WeChat everyday.
WeChat mini-programs can be regarded as subapplications within WeChat ecosystem.These miniprograms enable businessmen to provide their users with virtual store tour, task management and other services.They are essentially small applications that function within WeChat which operate like a separate mobile app, except that they are hosted and function within another app without the need for separate installation.And it's easy for business men to create their own mini-programs on WeChat as small shops to sell their products.
The process usually goes like this: (a) The people saw some advertisements published on the WeChat moments with their friends or acquaintances all comment behind discussing the product; (b) If the customer decided to purchase the goods, he or she may click the link or picture attached to the WeChat moment and would be transferred to a WeChat mini-program or another app Taobao, which is the Chinese version of Amazon to finish paying and fill out blank sheet with their address.
(c) And people usually pay by WeChat Pay or Alipay in China (in Figure 2).To make the transaction happen there is Wechat pay, which is a payment feature integrated into the WeChat app.It is a mobile payment solution that allows users to complete transactions quickly and easily using their smartphones.It has several features such as Quick Pay, QR Code Payment, In-App Web-based Payment, In-App Payment, Mini Program Payment, to fulfill different payment situations.Basically, people just store some money on their WeChat and confirm the payment.It is very convenient because you don't need a card, and it is more secure because you don't need to give out your security code and that would hinder online credit card skimming.
It should be noticed that social network played an important role in how the trade happens.Business men can tailor ads to customer preferences and increase the likelihood of people buying by allowing them to comment on the same ad.At the same time, the flow from interest generation to payment is very smooth within WeChat.All of this helps to develop online commerce by way of social networking.
What's more, even social warmness can be a product on WeChat.Coca-Cola launched a WeChat campaign that allowed users to send virtual Coke bottles to their friends, which generated over 125,000 bottle sends in just one month; Starbucks launched a WeChat campaign that allowed users to send virtual gift cards to their friends, which generated over 62 million yuan in sales; And Durex launched a WeChat campaign that allowed users to send virtual "condom" packets to their friends, which generated over 10 million yuan in sales.
In terms of business data, Tencent's social network revenue in 2021 is $17.4 billion, according to Business of Apps.This item accounts for 19% of the company's total revenue [9].In the first quarter of 2022, WeChat had 1.26 billion active users, 3.5 million mini-programs, and had a total transaction volume of RMB 2.7 trillion in the previous year [9].
In conclusion, social network e-commerce on WeChat was a huge success.
Sococial network marketing on Bilibili
Bilibili is a video-sharing platform that is based on usergenerated content.The platform's content was mainly centered around comics, games, and anime in the early stages, but expanded to various sub-sectors such as life, music, and reading in the later stages.For this reason, Bilibili has been dubbed the Chinese version of Youtube.The platform has been around since 2010 but has risen dramatically in the past few years, nearly doubling its monthly active users.from only about 50 million in 2017, by June 2019 it had exceeded 100 million.
According to Analysis of the Marketing Strategy of Bilibili and the Reasons for Its Success, the study results indicate that Bilibili has a competitive advantage in China's young-generation consumer market through a product strategy of community-based management, a low-price simplicity pricing strategy, and a vertical marketing channel strategy [10].The reasons for Bilibili's success are excellent marketing strategy, highly sticky user base, and community-based operation model.And community mode operation and sticky-user groups are highly connected with social networks.On Bilibili, the ups, which refers to people who uploads videos on this platform usually cannot see their followers daily sharing except that they follow each other.
So the original setting of WeChat is to follow each other while Bleeping is a one-way follow, but it can be changed afterwards.However, the original setting does represent the individuality of the different software business models.WeChat focus on person-to-person ecommerce, while Bilibili targets at groups.Since people following the same up is usually an expert in some fields like make-up or cooking, they would all be interested in some products related to this area.What's more, the ups also show some videos about their daily life, and it makes followers feel more real and reliable.People tend to trust those ups and by the products they recommend.And there is usually a blue link, which is a blue product link at the top of the comments section below the Bilibili video.By clicking at these links people can transfer to another app, Taobao with coupons.And that is called fan benefits.People can purchase goods at a lower price through those ups, and that would also increase people's love and trust for ups and increase use of the software Bilibili for shopping.
According to the analysis on Bilibili Marketing Strategy, Bilibili has created a unique community atmosphere with pop-up and comment interactions, anime and manga culture, and participation in video creation [11].Accordingly, Bilibili has gradually formed a highly viscous user group that is mainly composed of teenagers.It is vital to mention the bullet-screen interaction, which is some comments going through your screen while you are watching this video.If it's a video selling products, there would be many comments praising this product and claiming that someone had bought it and its function was amazing.So under this circumstance, people would be more likely to shop on impulse.
Bilibili currently has revenues of $3.23 billion.In 2021, the company's total revenue was $3.00 billion, up from $1.75 billion in 2020 [12].In 2022, Bilibili achieved total net revenue of approximately RMB 22 billion via four segments: mobile games, live streaming, advertising and e-commerce.As for e-commerce versus social networking, Bilibili's third-quarter revenue was RMB734 million, up 78% from the same period in 2020.E-commerce accounted for 14% of the company's revenue.
Conclusion
While WeChat and Bilibili use different methods to organize the social network system on their platforms, their aims are the same: to increase user stickiness and to encourage purchasing.This essay has illustrated different social network modes on these two platforms and how it affects trade, and have found that both trading methods built on different social network modes has its benefits and drawbacks, which may help platform builders to update their business operating mode and help merchants to choose the specific platform they may wish to sell their products on.
What is a pity is that this paper cannot find very specific data on each module in ecommerce on Wechat and Bilibili, with which this research can be put forward into more detail analysis.
At the same time, in terms of the choice of research methods, due to the limited access to resources, this paper only selects some simple data on social platforms for theoretical and analysis, and does not combine text Big data for quantitative analysis.In the future, social data of users on social media can be obtained for indepth analysis, so as to further enrich the research content of this paper.
Figure1.
Figure1.US Social Commerce Buyers Growth,2017-2023 [7].Note: ages 14+; social network users who have made at least one purchase via any social channel (eg.Facobook Marketplace, Instagram Checkout, WeChat Mini Programs.Line Shopping, VK Market), including links and transactions on the platform itself, during the calendar year, including online, mobile, and tablet purchases.
Figure 2
Figure 2 The process (Picture credit: Original) Therefore, the author would also introduce WeChat Pay in brief. | 3,075.8 | 2024-01-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
Development of multistage energy recovery system for gyrotrons
A four-stage depressed collector based on spatial separation of electrons with different energies in the crossed electric and magnetic fields was developed for the experimental SPbPU gyrotron. Modeling of the system of electron energy recovery and analysis of the distributions of electric and magnetic fields in the gyrotron collector region were performed. As a result of the theoretical estimations and the trajectory analysis of the helical electron beam, it is shown that the developed system provides recovery of the residual electron energy necessary to achieve the total efficiency of the gyrotron exceeding 70 %.
Introduction
Modern gyrotrons are microwave sources generating high output power in millimeter and submillimeter wavelength ranges, which substantially exceeds capability of conventional vacuum microwave devices such as traveling-wave tubes, magnetrons, klystrons, etc. [1]. Gyrotrons have already become highly required tools for electron current drive and plasma heating in controlled fusion experiments. Gyrotrons are also used for particle acceleration, in high-resolution spectroscopy, for material processing and for other applications [1,2].
The gyrotron is a cyclotron resonance maser that uses the energy of helical electron beam (HEB) concentrated in transverse motion of electrons to generate high-frequency electromagnetic radiation. The electron efficiency of gyrotrons determined by the electron energy transferred to high-frequency radiation does not usually exceed 30-35 % [2]. The total efficiency can be increased by a system of energy recovery placed in the collector region in which electrons of the spent beam are decelerated. Deceleration of electrons allows to decrease beam power deposited in the collector, in other words, to return a part of the beam power back to the electric circuit and to diminish the collector thermal loading.
Presently, high power gyrotrons are usually equipped with one-stage energy recovery systems that allow increasing the total efficiency up to 50-55 % [1][2][3]. The further increase of total efficiency is possible with implementation of multistage systems of energy recovery. Such systems have to separate the electron beam on fractions with different energy of electrons and provide deposition of these fractions on sections under different potentials [4][5][6][7][8][9]. The magnetic induction in the collector region of gyrotrons is noticeably less than in the resonator. As a result, major part of electron energy is concentrated in the longitudinal (along magnetic field lines) motion, which can simplify implementation of multistage energy recovery. Increase of the number of deceleration stages leads to enhancement of the total efficiency. However, as we know, systems with multistage energy recovery have not yet been implemented in gyrotrons, possibly due to the inherent velocity and position spreads of electrons in the HEB, as well as due to the presence of a residual magnetic field in the collector region.
In order to achieve recovery of electron energy in a multistage depressed collector, a method of separation of electrons with different energy should be realized. A new approach to separation of electrons, based on their radial drift in the crossed electric and magnetic fields, was studied by several research groups (for example, [5][6][7][8][9]). The authors of this paper proposed to use the axial electric field and the azimuthal magnetic field with the aim of providing effective separation of electrons.
In this paper, the design of a 4-stage energy recovery system for the pulsed 74.2 GHz, 100 kW gyrotron [10] is discussed. The main criteria for selecting parameters of the collector electrodes and coils to achieve effective recovery of residual electron energy of the spent beam are determined. The analysis of field distributions in the developed recovery system allows to define limitations in the capabilities of this system and to suggest possible ways to reduce these limitations. In conclusion, the results of electron trajectory analysis of the spent beam are presented. These results show the possibility to achieve record values of total efficiency of the gyrotron.
Method of electron separation
The separation of electrons in the designed multistage recovery system results from introduction of the azimuthal component of magnetic field B in addition to the axial magnetic component B z confining the beam and the retarding axial electric field E z . The mechanism of separation is based on the radial drift of electrons in the crossed axial electric and azimuthal magnetic fields. Fig. 1 shows schematically the principle of the spatial separation. The trajectories of electrons with different initial energy W i are presented. Electrodes (sections) under different potentials create the electric field, and magnetic field is produced by coils hidden in the scheme. The distribution of electric field along the direction of electron motion can be adjusted by changing the tilt angle of the sections. The velocity of radial drift v dr = E z /B depends only on magnetic and electric field amplitudes and does not depend on energy of particles. Therefore, the distance of radial drift will be determined by the transit time of electrons moving in the region of crossed fields, which is defined by the initial energy W i (Fig. 1). With proper selection of the electric and magnetic field amplitude, we can provide spatial separation of electrons with different energy resulting in deposition of beam energy fractions on the sections under different potentials φ i . The azimuthal and axial drifts caused by the combinations of fields E r ×B z and E r ×B also exist but do not essentially affect the separation of electrons in this configuration of electrodes.
During motion of an electron in retarding electric field E z , it can change the direction of its axial velocity to opposite direction. In the presence of a confining magnetic field B z , such reflected electrons move adiabatically and can exit the collector region and reach the resonator if there is no enough radial drift to intercept them by one of the collector sections. In the resonator, these electrons can interact with high-frequency field and take energy from it decreasing output power of the gyrotron. Therefore, reflection of electrons should be reduced to the limit value of at least 1-2 % that can be acceptable for gyrotrons [11].
Meaning basic principle of the separation, it is possible to determine the requirements for the sources of electric and magnetic fields, which allow to realize an effective multistage energy recovery of the spent beam. First, the distance of radial drift of electrons during the movement in retarding field until they are deposited on one of the electrodes should significantly exceed the thickness of the hollow HEB. In this case, deposition of electrons occur without reflection to the resonator. Secondly, the amplitude of the electric and magnetic fields should vary slightly along the axial coordinate z in the region of electron deceleration. It is important for providing an acceptable length of the collector system at approximately equal thermal loading on each collector section. Thirdly, the magnetic and electric fields should change adiabatically in the transition region between the resonator and the collector. In the opposite case, electrons can acquire an additional transverse velocity, which increases probability of their reflection in the direction of the resonator.
The development of a 4-stage collector system for the SPbPU gyrotron
The modeling of the collector system was performed for a pulsed gyrotron of average power of ~ 100 kW. The main parameters of the gyrotron are shown in Table 1. Previously, this gyrotron was used for complex experimental study aimed at finding methods to enhance the quality of the HEB and, as a result, the efficiency of the device [10,12,13]. In this gyrotron, a triode-type magnetron injection gun (MIG) forms the electron beam. The magnetic system consists of solenoids which are powered by a capacitive storage bank operating in a single pulse regime.
Elements of the collector system were designed to provide multi-stage recovery of residual beam energy based on the requirements given in the Section 2. The threedimensional drawing of the gyrotron collector model is shown in Fig. 2. Modeling of the collector system and calculation of electron trajectories were performed using the simulation code CST Studio Suite.
In the collector region, a series of Helmholtz coils is used for confinement of the spent electron beam. In combination with the coils of the main magnetic system of the gyrotron, these coils create a quasi-homogeneous distribution of the magnetic field B z along the axial coordinate in the region of deceleration of electrons. The azimuthal magnetic field B is generated by a solenoid with toroidal winding. Wires of the toroidal solenoid are grouped together and form two "wisps" before the entrance of the collector to provide access of electrons to the region of energy recovery (Fig. 2). Outer winding of the toroidal coil can be made using wires with an increased cross sectional area to improve the uniformity of the azimuthal field distribution along the azimuthal coordinate.
Four conical sections I-IV are used for creation of electric field in the energy recovery region. The electric potentials of sections decrease in direction from the resonator. The geometric parameters of the sections were chosen based on requirements of the section 2 and on estimations of drift radial distance of electrons with different energies at given values of E z and B . Optimization of the section geometry was carried out according to the results of the trajectory analysis [14]. Conical form of the sections provides the deposition of the major part of particles on the their outer walls [14]. By changing the angle of electrode inclination, reduction of the incident angle is provided for primary electrons, which further reduces the thermal loading of the collector. Secondary electrons emitted from the collector can have a negative effect on the operation of the device, if there is a possibility of their moving toward the resonator. However, due to the presence of the crossed E z ×B fields and the conical shape of sections, such a possibility is practically excluded, which is one of the advantages of the considered spatial separation method [8]. Fig. 3 shows the distribution of the magnetic field components in the collector region. The image of the collector model in the cross-section of the "wisps" is shown in Fig. 3, b. Current in the "wisp" wires creates an additional axial magnetic field B' z in its vicinity. This field has both the positive (B' z > 0) and the negative (B' z < 0) direction with respect to the direction of the field B z created by the magnetic system of the gyrotron and by Helmholtz coils. Fig. 3, a shows the axial and azimuthal components of the total magnetic field as functions of the axial coordinate. The values of B z , B' z , B for each z were determined at the radial coordinate r corresponding to the average radius of the HEB calculated in the absence of magnetic field of the toroidal coil. Electrons that enter the separation area in the region of "+" angles ( Fig. 3, a), can move at too small radii, and their drift distance can not be enough to deposit on one of the sections. Such electrons increase the coefficient of reflection from the collector. If electrons enter the collector in the region of "-" angles, the total axial magnetic field along their trajectory can change the direction to opposite. Such a reversal of magnetic field leads to a noticeable change in electron transverse velocity and also to an increase of reflection of electrons.
To reduce the disturbing effect of the magnetic field of the "wisps", a sectioned electron beam was used in the calculations. This beam was formed by a sectioned cathode included two sectors with no emission, the azimuthal positions of which corresponded to the azimuthal location of the "wisps" [14]. Based on the results of the trajectory analysis, optimal length of these sectors in the azimuthal direction was chosen to be 70°. With the help of the sectioned electron beam, the percentage of electrons reflected from the collector was significantly reduced compared with the uniform beam. In the separation region (z > 150 mm), the longitudinal component of magnetic field induction B z varies slightly along the z coordinate and is equal to approximately 0.05 T.
The trajectory analysis [8] showed that the average radius of the beam R av ≈ 55 mm and the beam thickness R ≈ 3 mm at B z = 0.05 T. In the recovery region, the value B is equal to approximately 0.08 T (Fig. 3, a). Analytic estimations obtained from solving the equations of motion of electrons in these fields show that the drift distance at typical values of electron initial energy is significantly higher than R. For example, if B z = 0.05 T, B = 0.08 T and E z = 1 kV/cm, the trajectory of an electron up to time of its reflection shifts radially on the distance R dr from 7 to 14 mm when changing the initial energy from 8 to 38 keV. Note that, since the main part of electrons is deposited on the outer walls of the collector sections, the radial drift distance exceeds R dr .
In order to determine the characteristics of the spent beam, calculations of electron trajectories in the electron optical system of the gyrotron and then calculations of the interaction of HEB with high-frequency field in the resonator were carried out [14]. A sectioned HEB was used in these calculations. A control electrode was included in the MIG, which gave the possibility to regulate the distribution of electric field in cathode region [14,15]. The following calculations were performed for the regime of the gyrotron characterized by the high quality HEB with the low velocity spread v = 3.4 % and with the average pitch ratio α = v / v = 1.52. In this regime, the calculated values of the output microwave power P RF and electronic efficiency of the gyrotron el were 138 kW and 46 % respectively. The data of the spent HEB in the output port of the resonator consisting of about 25•10 3 particles [14] were served as input data for the calculation of the electron trajectories in the collector.
As a result of the optimization, the following values of potentials of the collector sections were selected: = -7.72 kV, = -10.72 kV, = -14.72 kV, IV = -24.72 kV. These potentials are specified in relation to the grounded collector body. The distribution of the potential in the collector region at these potentials of the sections is shown in Fig. 4. To provide a quasi-uniform longitudinal electric field in the recovery region (z > 150 mm), the collector sections were equipped with the cylindrical elements which shielded the working space from the grounded collector.
The trajectory analysis of the collector with optimized geometry of sections and mentioned values of potentials I-IV allowed to obtain the value of the power dissipated on the collector P diss equal to 54.19 kW at the current of reflected electrons equal to 1.37 % of the total beam current I b = 10 A. In the considered regime of the gyrotron, the total efficiency of the device was achieved to be 71.8 % at the recovery efficiency of 66.5%.
Conclusion
The design of a 4-stage energy recovery system based on the method of spatial separation of electrons in the crossed azimuthal magnetic and axial electric fields was developed. The numerical simulation showed that the practical implementation of multistage collectors is possible for pulsed gyrotrons such as the experimental SPbPU gyrotron. Trajectory analysis shows the possibility of achieving the necessary spatial separation of electrons with different energies in the presence of initial radial coordinate spread and electron velocity spread. Drawbacks of the proposed collector system are mainly related to the local inhomogeneity of the magnetic field created by the "wisps" of the toroidal solenoid. A possible solution to reduce the negative effect of this inhomogeneity on recovery efficiency can be the use of a sectioned cathode. In the optimized regime of the gyrotron, the total efficiency of 71.8% was achieved with the recovery efficiency of 66.5% and the reflection coefficient of 1.37% of the total current of the beam.
Using the simulation data, a 4-stage collector system for the SPbPU gyrotron was designed and manufactured. The first experimental results showed the possibility of achieving the total efficiency of 60 % in a single-stage regime, which allows us to hope for the successful implementation of a multi-stage recovery scheme. The possibilities of further improvement of the developed method of recovery are connected with the improvement of the design of magnetic system providing the required distribution of the azimuthal magnetic field. | 3,886.4 | 2019-01-01T00:00:00.000 | [
"Physics"
] |
Circulating Tryptase as a Marker for Subclinical Atherosclerosis in Obese Subjects
Introduction Mast cells participate in atherogenesis by releasing cytokines to induce vascular cell protease expression. Tryptase is expressed highly in human atherosclerotic lesions and the inhibition of tryptase activity hampers its capacity to maintain cholesterol inside macrophague foam cells. We aimed to investigate the association between circulating tryptase levels and subclinical atherosclerosis through estimation of carotid intima-media thickness (c-IMT) as surrogate marker for increased cardiovascular risk in obese and non-obese subjects. Methods Circulating tryptase levels (ELISA) and metabolic parameters were analyzed in 228 subjects. Atherosclerosis (c-IMT>0.9 mm) was evaluated ultrasonographically. Results Significant positive associations were evident between circulating tryptase levels and BMI, fat mass, glycated haemoglobin, fasting insulin, HOMAIR, fasting triglycerides and ultrasensitive PCR (p<0.05 from linear-trend ANOVA). The positive association between tryptase levels and insulin resistance parameters, suggested a glucose homeostasis impairment in individuals with higher tryptase levels. The negative asociation between tryptase levels and HDL-cholesterol supports the proatherogenic role of this protease (p<0.0001). Circulating tryptase levels were strongly associated with c-IMT measurements (p<0.0001 from linear-trend ANOVA), and were higher in subjects with presence of carotid plaque (p<0.0001). Tryptase levels (beta = 0.015, p = 0.001) contributed independently to subclinical atherosclerosis variance after controlling for cardiovascular risk factors (BMI, blood pressure, LDL-cholesterol). Conclusions Circulating tryptase level is associated to obesity related parameters and has a close relation with various metabolic risk factors. Moreover, serum tryptase level was independently associated with c-IMT, suggesting its potential use as a surrogate marker for subclinical atherosclerosis in obese subjects.
Introduction
Atherosclerosis is a chronic inflammatory disease that is the main cause of cardiovascular morbidity and mortality all over the world. It is characterized by the progressive accumulation of cholesterol in the intimal layer of arterial walls of large-and medium-sized arteries, leading to the formation of plaques and vascular obstruction [1,2]. Inflammatory cells such as lymphocytes, macrophages, neutrophils, and mast cells are involved in the pathogenesis of atherosclerotic plaque rupture, as they cause the fibrous plaque to weaken because of the enzyme activity of the leukocytes that degrade the extracellular matrix [3]. Mast cells are derived from pluripotent hematopoietic stem cells, which are released in the blood flow, and then migrate to the tissue where they proliferate, differentiate, and become resident [4,5]. Two types of mast cells, differing in neutral (cytoplasmic) proteases, are identified: mast cells that contain tryptase and mast cells that contain tryptase and chymase [6]. Importantly, these proteases were found in the human arterial intima twenty years ago, both normal and atherosclerotic [7].
Tryptase is a trypsin-like serine proteinase which has been estimated to constitute approximately 20% of the total cellular protein of human mast cells [8]. This is stored fully active in the cytoplasmic granules of all human mast cells and is released in the peripheral circulation [9]. The physiologic role of tryptase is still uncertain, however the activity of the enzyme is only observed in damaged tissues such as those of people with atherosclerosis [10]. Also, mast cells are locally activated and release tryptase into their microenvironment, where active tryptase can act on the various extracellular targets, i.e. activate pro-MMPs, and degrade lipoproteins and fibronectin [11]. Mast cell activation in atherosclerosis has been demonstrated to promote intraplaque haemor-rhage resulting in plaque progression and destabilisation. In this sense, several studies have established an association of blood tryptase levels with atherosclerotic plaque instability [12,13]. A recent paper performed in ApoE-/-mice, describes the role of tryptase in atherosclerotic progression and intraplaque hemorrhage [14]. Indeed, tryptase activates pro-metalloproteinases and chemokines, and degrades lipoproteins and fibronectin [15,16]. In a study of aortas sections obtained from autopsies has been shown that the degree of macroscopic lesion of atherosclerosis increased proportionally with the increase in the density of mast cell chymase and tryptase [17]. In this sense, in a recent paper mast cells are associated with plaque microvessel density [18]. Evidence derived from human data supports an association of mast cells and obesity since obese subjects had higher serum tryptase levels and an increased number of mast cells stained with tryptase than lean individuals in white adipose tissue [19][20][21]. In the same setting, tryptase has also been associated with older age, fasting glucose, total-and LDL-cholesterol and fasting triglycerides [12].
In recent years, there has been growing interest in identifying asymptomatic individuals with increased cardiovascular risk who may benefit from specific primary prevention. It is well-known that increased carotid intima-media thickness (c-IMT), measured by Bmode ultrasonography, constitutes an independent risk marker for coronary artery disease and stroke, becoming a sensitive subclinical atherosclerosis marker [22]. The purpose of our study was to investigate the association between circulating tryptase levels and subclinical atherosclerosis through estimation of c-IMT as a surrogate marker for increased cardiovascular risk.
Subjects
From January 2010 to February 2012, we consecutively recruited 228 subjects from the ongoing multicenter FLOR-INASH Project, undertaken to evaluate the role of intestinal microflora in adults with NAFLD (non-alcoholic fatty liver disease). Inclusion criteria were age 30 to 65 years, and ability to understand study procedures. Exclusion criteria were systemic disease, infection in the previous month, serious chronic illness, .20 g ethanol intake per day, or use of medications that might interfere with insulin action. 19 subjects were taking statins. No significant differences in tryptase levels were seen according to statin treatment. 72.6% of the population was non-smoker. Circulating tryptase levels were higher among smokers (13.6766.7 vs 11.5165.9 p = 0.022). All subjects gave written informed consent, validated and approved by the ethical committee of the Hospital Universitari Dr. Josep Trueta (Comitè d'È tica d'Investigació Clínica, CEIC), after the purpose of the study was explained to them. Ethical committee of the Hospital Universitari Dr. Josep Trueta specifically approved this study (ethical approval number 2009046).
Analytical methods
Each patient underwent anthropometric measurements, vascular ultrasound and laboratory parameters on the same day. After 8 h fasting, blood was obtained for measurement of plasma lipids, glucose, and insulin. Serum glucose concentrations were measured in duplicate by the glucose oxidase method using a Beckman glucose analyser II (Beckman Instruments, Brea, California). Intraassay and interassay coefficients of variation were less than 4% for all these tests. We used a Roche Hitachi Cobas c 711 instrument to do the determinations. Total serum cholesterol was measured by an enzymatic, colorimetric method through the cholesterol esterase / cholesterol oxidase / peroxidase reaction (Cobas CHOL2). HDL cholesterol was quantified by an homogeneous enzymatic colorimetric assay through the cholesterol esterase / cholesterol oxidase / peroxidase reaction (Cobas HDLC3). Total serum triglycerides were measured by an enzymatic, colorimetric method with glycerol phosphate oxidase and peroxidase (Cobas TRIGL). LDL cholesterol was calculated using the Friedewald formula. Glycated haemoglobin (HbA1c) was measured by high-pressure liquid chromatography with the use of a fully automated glycosylated hemoglobin analyzer system (Hitachi L-9100). C-reactive protein (ultrasensitive assay; Beckman, Fullerton, CA) was determined by a routine laboratory test, with intra-and interassay coefficients of variation ,4%. The lower limit of detection is 0.02 mg/l.
Serum insulin was measured in duplicate in the same centralized laboratory by a monoclonal immunoradiometric assay (Medgenix Diagnostics, Fleunes, Belgium). The intra-assay CV was 5.2% at a concentration of 10 mU/l and 3.4% at 130 mU/l. The inter-assay CVs were 6.9 and 4.5% at 14 and 89 mU/l, respectively. Insulin resistance was determined by the homeostasis model assessment of insulin resistance (HOMA IR ).
Body composition
Fat mass was determinated by dual energy x-ray absorptiometry (DEXA), using a Lunar Prodigy Full Oracle (GE Healthcare, enCore software version 13.2). Whole body composition (fat mass, fat-free soft tissue mass) was obtanined according to standard procedures, by trained personnel. Body fat composition was also estimated in those subjects by Bio-electrical impedance analysis (BC-418, Tanita Corporation of America, Illinois, USA). Obesity was defined as BMI .30 kg/m 2 .
Ultrasound evaluation
We used a Siemens Acuson S2000 (Mochida Siemens Medical System, Tokyo, Japan) ultrasound system with a 3.5 MHz convex transducer to scan the liver and a 7.5 mHz linear array transducer to scan carotid arteries. Images were transferred to Starviewer software, developed in our laboratory (http://gilab.udg.edu), and independently evaluated by two radiologists blinded to clinical and laboratory data. Carotid arteries were evaluated according to the Mannheim Consensus [23]. c-IMT values were manually measured in the far wall of each common carotid artery in two locations a) in a proximal segment and b) in a plaque-free segment 10 mm from the bifurcation. Measurements were performed by two different observers. Pearson's correlation for c-IMT was 0.75. The mean c-IMT value for each subject was calculated from these four measurements. Values .0.90 mm were considered pathologically increased. Plaque was defined as a focal structure of the inner vessel wall of at least 0.5 mm or 50% of the surrounding IMT value, as well as demonstrates a thickness .1.5 mm as measured from the media-adventitia interface to the intima lumen interface any IMT measurement .1.5 mm. [23].
Statistical analysis
Statistical analyses were performed using SPSS 12.0 software for Windows (SPSS, Chicago, IL, USA). Pearson correlation was used to determine agreement on c-IMT. Results are expressed as means 6 standard deviation for continuous variables. Parameters that did not fullfil normal distribution were mathematically transformed to improve symmetry for subsequent analyses. One-way ANOVA with Bonferroni correction as the post-hoc test, were used to seek differences in clinical variables among groups. We used Student's t-test to determine differences in quantitative variables. The relation between variables was tested using Pearson's test and stepwise multiple linear regression analysis. The general linear model was also used to identify independent predictors of atherosclerosis after adjusting for cardiovascular risk factors (BMI, blood pressure or LDL-cholesterol). Receiver operating characteristic (ROC) curve analysis was used to determine the diagnostic potential. Statistical significance was set at p,0.05.
Characteristics of the study participants
To study whether circulating tryptase levels are associated with metabolic parameters in humans, the study subjects were stratified according to tryptase quartiles. Significant positive associations were evident between circulating tryptase levels and BMI, fat mass, glycated haemoglobin, fasting insulin, HOMAIR, fasting triglycerides and ultrasensitive PCR. On the contrary, negative associations with HDL-cholesterol were observed (p,0.05 for linear-trend ANOVA for comparisons across tryptase quartiles; Table 1).
We next explored the association between circulating tryptase levels and C-IMT. As shown in figure 1, C-IMT was significantly higher in the highest quartile than in the middle or lowest circulating tryptase quartiles (p,0.05 for linear-trend ANOVA for comparisons across tryptase quartiles). Moreover, when the presence or absence of carotid plaque was evaluated, subjects with carotid plaque presence shown higher circulating tryptase level than carotid plaque absence (p,0.0001) (figure 2). Importantly, the area under the curve for circulating tryptase to predict atherosclerosis was 0.653 (0.532-0.774) in both gender combined ( figure 3).
In multiple linear regression models, BMI (beta = 0.310, p = 0.001) contributed independently to circulating tryptase levels variance after controlling for age, gender and smoking. Moreover, circulating tryptase levels (beta = 0.219, p = 0.002) contributed independently to subclinical atherosclerosis variance after controlling for several cardiovascular risk factors (BMI, blood pressure, LDL-cholesterol or smoking).
Discussion
The present study confirmed that serum tryptase level has a positive correlation with obesity and insulin resistance related parameters and with and adverse lipid profile. Most importantly, we demonstrated for the first time that circulating tryptase levels were positively associated with subclinical atherosclerosis, as represented by the c-IMT. Moreover, circulating tryptase level was an independent determining factor for the subclinical atherosclerosis, even after adjusting for other cardiovascular risk factors. Our data are in line with a previous paper reported mast cells distribution density in carotid samples was associated with an atherogenic lipid profile and high-grade of carotid artery stenosis [24].
Although tryptase is a well-known protease with an established role in immune process, recent data indicate that might provide a link between obesity and chronic inflammation. On the one hand, growing evidence derived from human data supports the link among tryptase and obesity. In the current study, there is a trend toward between circulating tryptase levels and obesity related parameters such as BMI and fat mass. These findings are in line with other authors who reported higher tryptase levels in obese subjects [18,19]. Moreover, an increased number of mast cells stained with tryptase have been also demonstrated in scWAT of obese patients, [20] suggesting that mast cell accumulation contributes to adipose tissue inflammation and alteration of glycemic status in obese subjects. A positive association between circulating tryptase levels and insulin resistance parameters (HbA1c, insulin and HOMA IR ) have been also found in our study, suggesting a glucose homeostasis impairment in those individuals with higher tryptase levels. In line with our results, two Chinese studies, have considered circulating tryptase levels as an independent risk factor of pre-diabetes and diabetes mellitus [25,26], reinforcing the close association between the immune system, obesity, and vascular diseases.
On the other hand, epidemiology, histopathology and experimental studies point toward a proatherogenic role for mast cells, involving tryptase [27]. According to the cholesterol balance theory of atherogenesis, atherosclerosis is a cholesterol storage disease of the arterial intima in which cholesterol accumulation results from an imbalance between cholesterol influx and efflux [28]. HDL 3 particles efficiently remove cholesterol from foam cells. At this point we should highlight the pre-betaHDLdegrading effect of tryptase previously reported [29]. In our study we have found an inverse association between circulating tryptase and HDL-cholesterol levels. Tryptase mesurements have been performed in steady state conditions when the chief isoform is alpha-tryptase, responsible for degradation of the apolipoprotein A-I of prebeta HDL particles [30,31]. However, since pre-beta form of HDL is a very small fraction of the HDL-class in the circulation and the degrading effect is likely to occur in tissues but not in the circulation, serum tryptase is unlikely to contribute significantly to low HDL-cholesterol levels observed herein. Indeed, the main contributor to the HDL-cholesterol levels variation is BMI (beta = 20.546; p = ,0.0001) but not tryptase levels (beta = 20.043; p = 0.468).
In the light of our data it could be possible that adipose-derived tryptase may be involved in the pathogenesis of obesity-related inflammatory disorders, including atherosclerosis. However, the prevailing concept is that the tryptase acts in the tissue in which it is secreted but not in remote tissues into which a small fraction of the circulating tryptase may be transported across the endothelial barrier [32]. There are many questions emerged from this study, whether the adipose tissue is a critical source of tryptase levels and if it could act on atherosclerotic plaques or what the tryptase activity originating from adipose tissue is. Further studies will be needed to test the tryptase activity derived from adipose tissue.
Indeed, tryptase levels of our study are quite high compared to the reference range in healthy population. However, tryptase basal levels are associated with BMI. In agreement with our data, Liu et al. have reported similar tryptase levels in their lean vs obese population [19], suggesting that baseline levels in obese subjects are higher than the reference range in healthy population Regarding the influence of circulating tryptase on the atherosclerotic process, few studies have addressed this question. In an autopsy cases study, the degree of atherosclerosis was positively correlated with the expression of local tryptase in the atherosclerotic plaques [16]. In this sense, the main finding of our crosssectional study is that circulating tryptase levels are stongly associated with c-IMT measurements, being the highest tryptase levels in patients with carotid plaque. A thickened c-IMT does not immediately lead to cardiovascular events, but reflects the degree of atherosclerosis elsewhere in the arterial system [33]. Moreover our results suggests that circulating tryptase could be a useful marker as a predictor of cardiovascular disease.
In conclusion, the present study confirmed that circulating tryptase level is significantly elevated in obese individuals and is associated with various metabolic risk factors. Moreover, we demonstrated that the serum tryptase levels were independently associated with c-IMT and its potential use as a marker for subclinical atherosclerosis. Further experimental studies are warranted to clarify the role of tryptase in the atherosclerotic process. | 3,725.4 | 2014-05-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Impact of vertical stratification of inherent optical properties on radiative transfer in a plane-parallel turbid medium
The atmosphere is often divided into several homogeneous layers in simulations of radiative transfer in plane-parallel media. This artificial stratification introduces discontinuities in the vertical distribution of the inherent optical properties at boundaries between layers, which result in discontinuous radiances and irradiances at layer interfaces, which lead to errors in the radiative transfer simulations. To investigate the effect of the vertical discontinuity of the atmosphere on radiative transfer simulations, a simple two layer model with only aerosols and molecules and no gas absorption is used. The results show that errors larger than 10% for radiances and several percent for irradiances could be introduced if the atmosphere is not layered properly. ©2009 Optical Society of America OCIS codes: (010.1310) Atmospheric scattering; (010.5620) Radiative transfer References and links 1. K. Stamnes, S. C. Tsay, W. Wiscombe, and K. Jayaweera, “Numerically stable algorithm for discrete ordinate method radiative transfer in multiple scattering and emitting layered media,” Appl. Opt. 27(12), 2502–2509 (1988). 2. A. Berk, L. S. Bernstein, and D. C. Robertson, “MODTRAN: A moderate resolution model for LOWTRAN 7,” GL-TR-89–0122, Phillips Laboratory, ADA214337 (1989). 3. K. F. Evans, and G. L. Stephens, “A new polarized atmospheric radiative transfer model,” J. Quant. Spectrosc. Radiat. Transf. 46(5), 413–423 (1991). 4. E. F. Vermote, D. Tanre, J. L. Deuze, M. Herman, and J.-J. Morcette, “Second simulation of the satellite signal in the solar spectrum, 6S: an overview,” IEEE Trans. Geosci. Rem. Sens. 35(3), 675–686 (1997). 5. Q. Min, and M. Duan, “A successive order of scattering model for solving vector radiative transfer in the atmosphere,” J. Quant. Spectrosc. Radiat. Transf. 87(3-4), 243–259 (2004). 6. M. Duan, and Q. Min, “A polarized radiative transfer model based on successive order of scattering method,” Adv. Atmos. Sci. Doi: 10.1007/s00376-009-9049-8 (in print). 7. G. E. Thomas, and K. Stamnes, Radiative transfer in the atmosphere and ocean. (Cambridge, 1999), P160. 8. J. Xie, and X. Xia, “Long-term trend in aerosol optical depth from 1980 to 2001 in north China,” Particuology 6(2), 106–111 (2008). 9. P. M. Teillet, “Rayleigh optical depth comparisons from various sources,” Appl. Opt. 29(13), 1897–1900 (1990). 10. L. Harrison, and J. Michalsky, “Objective algorithms for the retrieval of optical depths from ground-based measurements,” Appl. Opt. 33(22), 5126–5132 (1994). 11. L. Harrison, J. Michalsky, and J. Berndt, “Automated multifilter rotating shadow-band radiometer: an instrument for optical depth and radiation measurements,” Appl. Opt. 33(22), 5118–5125 (1994). 12. Q. Min, and L. Harrison, “Cloud Properties Derived From Surface MFRSR Measurements and Comparison With GOES Results at the ARM SGP Site,” Geophys. Res. Lett. 23(13), 1641 (1996). 13. Q. Min, E. Joseph, and M. Duan, “Retrievals of thin cloud optical depth from a multifilter rotating shadowband radiometer,” J. Geophys. Res. 109(D2), D02201 (2004), doi:10.1029/2003JD003964. 14. M. Alexandrov, A. Marshak, B. Cairns, A. A. Lacis, and B. E. Carlson, “Automated cloud screening algorithm for MFRSR data,” Geophys. Res. Lett. 31(4), L04118 (2004), doi:10.1029/2003GL019105. 15. M. Alexandrov, A. Lacis, B. Carlson, and B. Cairns, “Remote sensing of atmospheric aerosols and trace gases by means of multifilter rotating shadowband radiometer. Part I: Retrieval algorithm,” J. Atmos. Sci. 59(3), 524–543 (2002). 16. H. R. Gordon, and D. J. Castaño, “Coastal zone color scanner atmospheric effects and its application algorithm: multiple scattering effects,” Appl. Opt. 26(11), 2111–2122 (1987). #121457 $15.00 USD Received 14 Dec 2009; revised 19 Feb 2010; accepted 1 Mar 2010; published 4 Mar 2010 (C) 2010 OSA 15 March 2010 / Vol. 18, No. 6 / OPTICS EXPRESS 5629 17. M. Wang, and H. R. Gordon, “Retrieval of the columnar aerosol phase function and single-scatting albedo from sky radiance over the ocean: simulations,” Appl. Opt. 32(24), 4598–4609 (1993). 18. F. Kuik, J. F. De Haan, and J. W. Hovenier, “Benchmark results for single scattering by spheroids,” J. Quant. Spectrosc. Radiat. Transf. 47(6), 477–489 (1992). 19. M. Duan, Q. Min, and J. Li, “A fast radiative transfer model for simulating high-resolution absorption bands,” J. Geophys. Res. 110(D15), D15201 (2005), doi:10.1029/2004JD005590.
Introduction
The radiative transfer (RT) equation is an integro-differential equation.Even in onedimensional plane-parallel cases, it is usually solved by numerical approximation [1][2][3][4][5][6].In most numerical RT models, the atmosphere is often divided into several layers.Each layer is assumed to be homogenous but the inherent optical properties (IOPs) are allowed to vary from layer to layer in order to resolve the vertical variation in the IOPs.This plane-parallel configuration artificially introduces vertical discontinuities of the atmospheric IOPs at layer boundaries that lead to an unphysical discontinuity in the radiation field.In this study, we will investigate the effects of the vertical discontinuities on simulations of radiances and irradiances.
Derivation of the discontinuity and its underlying physics
The radiative transfer equation for a plane-parallel scattering and absorbing medium can be expressed as [1]: Here µ is the cosine of zenith angle (Fig. 1), positive for downward and negative for upward directions, φ is azimuth angle, τ is the optical depth, I is intensity (radiance) of the radiation field, and J is source function given by: 0 / 0 0 0 The first and second terms on the right side of Eq. ( 2) are source functions due to single scattering and multiple scattering, respectively.ω is the single scattering albedo, P is the scattering phase function due to single scattering, πF 0 is the solar irradiance at the top of the atmosphere (TOA), and µ 0 and φ 0 are the cosine of the solar zenith angle and solar azimuth angle, respectively.The optical depth, τ, is given through integration of extinction coefficient β: Formal solutions of Eq. ( 1) for radiances in the downward and upward directions at optical depth τ can be written as: where b is the total optical thickness of the atmosphere, the symbols ↑ and ↓ denote the upward and downward radiance, respectively.In the Eqs.( 4) and ( 5), the values of µ are always positive, that isµ = |µ|>0.Physically, the radiance I ↑ and I ↓ for the horizontal direction (µ = 0) at τ should have exactly the same value if the atmospheric optical parameters vary continuously with optical depth.Therefore, the following equality must be satisfied: = (6) However, since the whole atmosphere is often divided into many layers and each layer is assumed to be homogeneous, the IOPs, such as single scattering albedo ω and phase function P, differ between neighboring layers.Thus, the IOPs are discontinuities across layer boundaries.In this situation, Eq. ( 6) may not be satisfied.To provide direct insight into the consequence of this artificial discontinuity, a two-layer model (Fig. 1) is used to simulate the radiances and irradiances.The single scattering albedo and scattering phase function are set to be ω x , and P x for layer x, where x = 1, 2 (for details, see section 3).
Single Scattering Approximation
For the single scattering case, the source function J in Eqs. ( 4) and ( 5) are replaced by: Because both layers are homogeneous, insertion of Eq. ( 7) into (4) and ( 5), leads to the following expressions for the single scattering radiance: For µ = 0, the single scattering radiance in the horizontal direction becomes: ( , 0, ) ( , 0, ) ) Therefore, the downward radiance in Eq. ( 8) and the upward radiance in Eq. ( 9) give different results in the horizontal direction at the interface between layers 1 and 2. The difference depends on the difference between the products ω 1 P 1 and ω 2 P 2 .
Multiple Scattering Radiance
For the second and higher scattering orders, the source function J in Eqs. ( 4) and ( 5) is replaced by: where the subscript "n" stands for the n th order of scattering.Inserting Eq. ( 13) into Eq.( 4) and letting µ tend to 0, we find that the first term of Eq. ( 4) tends to zero, and the second term tends to ( , 0, ) n J τ φ ↓ [7].Therefore, the horizontal radiance for the n th order scattering resulting from the expression for the downward radiance (see Fig. 1) can be written as: and, similarly, the horizontal radiance resulting from the expression for the upward radiance becomes: Therefore, if 1 1 2 2
P P
ω ω ≠ , we have: ( , 0, ) ( , 0, ) ) Thus, again different horizontal radiances are obtained at the interface between the two layers if the IOPs are different in the two layers.Equation ( 16) also holds for the total radiance including all orders of scattering if we replace J n by the total source function J in Eqs. ( 14) and (15).Comparing Eq. ( 16) with ( 12), we easily see that Eq. ( 12) is a special case of Eq. ( 16) for the single scattering case for which n = 1.
Case study
To illustrate the effect of the vertical discontinuity of the atmospheric IOPs on radiative transfer simulations in a plane-parallel atmosphere, we use a simple two-layer model with only aerosols and molecules.The coefficients for aerosol extinction and Rayleigh scattering are assumed to decrease exponentially with height.The scale height of atmospheric density is about 8 km for the US standard atmosphere.For rural aerosols, the scale height is about 1.7 km and a value of 2 km is used in our case study.Thus, we adopt the following expressions for aerosol extinction and molecular scattering (for simplicity we ignore molecular absorption in this study): ,0 exp( / 8) where the subscripts a and m, respectively, stand for aerosol and molecule, and "0" means the coefficients at height z = 0km.The column integrated optical depths due to aerosol extinction and molecular (Rayleigh) scattering are τ a = 2β a,0 , τ m = 8β m,0 .
In the case study, we assume a total aerosol optical depth of 0.4, which is the mean value over China [8].The molecular scattering optical depth for the US standard atmosphere is given by [9]: where λ is the wavelength in micrometers (µm).We use a total Rayleigh scattering optical depth of 0.316, which corresponds to λ = 0.415µm, which is one of the wavelengths used in the Multi-Filter Rotating Shadowband Radiometer (MFRSR) designed for retrieval of aerosol and cloud optical depths [10][11][12][13][14][15].It is also close to the 0.413µm, which is adopted in instruments used in ocean color satellite remote sensing [16,17].The aerosol scattering phase function P a (Fig. 2) for randomly-oriented prolate spheroids with aspect ratio a/b = 4.0, size parameter x = 10.079368, and a refractive index m = 1.55-0.01i is used [18], and P a and the aerosol single scattering albedo ω α are assumed to be constant in each layer.As already mentioned, we further assume that there is no gas absorption.If the whole atmosphere is separated into several layers, and each layer is assumed to be homogeneous, then the optical depth ∆τ x , single scattering albedo ω x , and phase function P x for a layer x located between z 1 and z 2 can be written as: , , x a a x a m x m x x where τ a,x and τ m,x , respectively, are the contributions of aerosol and Rayleigh scattering.
Unless otherwise stated, we use the DISORT [1] radiative transfer code, to investigate the effects of the vertical IOP discontinuities on simulated radiances and irradiances.The whole atmosphere is separated into two layers at z = 2 km, as shown in Fig. 1.Thus, the aerosol optical depth of layers 1 and 2 are 0.147 and 0.253 respectively, and the Rayleigh scattering optical depth of layers 1 and 2 are 0.246 and 0.07, respectively.The solar zenith angle is set to be 53.13degrees (µ 0 = 0.6).All the results shown in this paper are normalized by multiplying by 1/F 0 .
Horizontal radiance at layer boundary
The horizontal radiances at the interface (z = 2 km) between the two layers are shown in Fig. 3 as a function of the aerosol single scattering albedo ω a , and the errors of the horizontal radiances given by the 2-layer model are also illustrated."2km+" and "2km-" denote the horizontal radiances (µ = 0), which are calculated from the downward radiance in the upper layer ("2km+"), and from the upward radiance in the lower layer ("2km-") of the 2-layer model.The "true" value is calculated from the improved successive order of scattering method SOSVRT [5,6], the vertical variation of atmospheric optical properties is properly taken into account (detail in section 3.2).The "true" value is further validated against a DISORT calculation with 80 layers.With such a large number of layers, the discontinuity between layers can be ignored.The radiances for azimuth angles of 0, 90, and 180 degrees are illustrated in Fig. 3, which clearly shows that different horizontal radiances are obtained at the interface between the two layers from the downward radiance in the upper layer ("2km+") than from the upward radiance in the lower layer ("2km-").In the upper layer, there is more molecules than aerosols, and the molecules decrease more slowly, while in the lower layer, most of the scattering is due to aerosols.When the atmospheric layer is reconstructed as a vertically homogeneous layer, the vertical distribution of IOPs in the lower layer has a stronger variation than that in the upper layer, which explains why the radiances labeled "2km+" are more accurate than those labeled "2km-".
For radiances in directions other than the horizontal direction, the discontinuity is not easy to see because the radiances are forced to be the same by the layer interface condition: where the subscripts 1 and 2 denote layers 1 and layer 2 respectively.Why are different radiances in the same direction µ = 0 produced by the 2-layer model?The reason is that the atmosphere is broken into two different layers, but each layer is assumed to be homogenous.Thus, this 2-layer model is a very crude representation of the true vertical variation of the atmosphere.As a result, the atmospheric IOPs on one side of the interface between the 2 layers are different from the IOPs on the other side.This artificial discontinuity in atmospheric IOPs results in different horizontal radiances.Figure 3 shows that the differences increase with the increase in aerosol absorption (smaller single scattering albedo).Because we assume there is no molecular absorption in atmosphere, the smaller ω α results in bigger difference of the ω (or ωP) between the two layers and larger errors in the radiances.
Although the plane-parallel assumption is not applicable for direction near the horizon, the errors in the radiances at large zenith angles could introduce extra errors to radiances at small zenith angles, because the source functions due to multiple scattering are derived by integration over all polar angles.Such errors can be a problem for retrieval algorithms which use wavelengths in bands with strong gaseous absorption.For example, in high spectral resolution measurement of the oxygen A-band, the absorption optical depth varies from 0 to 10 or even 100 for ultra high spectral resolution [19], and the absorption coefficient of oxygen varies sharply with height, therefore, the single scattering albedo in a layer could be very small (close to 0), or close 1 due to the contribution of the oxygen absorption.
In real atmospheres, the atmospheric IOPs vary continuously with height.We ignore this continuity when we separate the atmosphere into a limited number of homogenous layers.Different concentrations of aerosols and molecules present in each layer, lead to different values of ω x and P x for the two layers, resulting in the difference in radiances.For example, if the lower layer contains absorbing aerosols (ω a = 0.5), whereas the upper layer contains molecules only, the single scattering albedo just below the interface of the two layers is 0.5, while it is 1.0 just above the interface.
Effects on calculation of radiances and irradiances
Generally, we are not interested in the radiance close to the horizontal direction in planeparallel atmospheres, but it is important to quantify the errors in radiances and irradiance incurred by the artificial discontinuity resulting from dividing the atmosphere into a small number of layers.Figure 4 illustrates the relative errors in radiances due to inadequate resolution of the vertical variation of atmospheric IOPs in the two-layer model.Only radiances for view zenith angles less than 75°, where the plane-parallel assumption is applicable, are plotted.This figure clearly shows that inadequate resolution of the vertical variation in the two-layer model can lead to significant errors in the radiances at small zenith angles, not only for radiance at the interface between layers, but for radiance of all levels including the TOA and the surface.The errors can be up to 10% or even larger, which becomes a serious concern in the development remote sensing algorithms.
Based on the two-layer model, the upward irradiance at the TOA and the downward irradiance at the surface are illustrated in Fig. 5 (left axis), relative errors are also plotted (relative to the right axis).For the irradiances at the TOA, errors of 20% are possible for aerosol with strong absorption and several percent for irradiances at other altitudes.To reduce the errors, more layers are needed in the radiative transfer simulation.To investigate how many layers are needed in the simulation, the whole atmosphere is divided into many layers based on Eq. ( 17) and each layer is assumed to be homogenous and has the same total optical depth.The optical parameters ω, ∆τ and P for each are given by Eqs.(20), (21).The "true" value is calculated from the improved version of SOSVRT by direct integration of the source function over τ with a small step size δτ; in the case study of this paper, we used δτ ≈0.002.For each integration step between τ i and τ i + δτ, the extinction coefficients β a and β m at three points, τ i , τ i + δτ/2 and τ i + δτ are computed through Eqs.(3), ( 17) and (18), and the single scattering albedos ω and phase functions P at the three points are given through Eqs. ( 21) and ( 22) by replacing optical depth ∆τ with extinction coefficient β.Allowing for this vertical variation within each layer in the SOSVRT is the main difference from a traditional n-layer model in which each layer is assumed to be homogeneous.Then the source functions can be calculated with Eq. ( 7) or ( 13) and integrated to compute the radiance by assuming it varies linear-exponentially with τ [5].Therefore, the vertical variation of atmospheric optical properties is properly taken into account.Figure 6 illustrates the maximum error in radiances and irradiances as a function of the number of layers.The results show that the stronger the aerosol absorption, the more layers are needed.For the case discussed in this paper, the number of layers needed to ensure 1% accuracy of radiances for zenith angles less than 75 degrees are 4, 5, 6, 7 for aerosol scattering albedo of 0.95, 0.8, 0.7 and 0.5, respectively, while 2, 4, 5 and 6 layers are needed to ensure 1% accuracy of irradiances.
Conclusion
In numerical simulations of radiative transfer, the atmosphere is often divided into many layers and each layer is assumed to be homogenous by neglecting the vertical variation of atmospheric inherent optical properties (IOPs) within each layer.This assumption introduces an artificial discontinuity of atmospheric IOPs and horizontal radiances at layer interfaces, and results in errors in radiances and irradiances also at small zenith angles and at all levels in the atmosphere.The bigger the difference in IOPs between layers, the larger the errors in radiances and irradiances.A simple two-layer radiative transfer model is used to investigate the impact of this discontinuity in atmospheric IOPs on radiances and irradiances.If the presence of strongly absorbing aerosols in atmosphere, the errors in radiances may be up to 10% for zenith angle less than 75°, and several percent in irradiances.Errors of this magnitude require serious consideration in the development of remote sensing algorithms and in climate modeling.For example, in radiative transfer simulations in absorption bands such as the oxygen A-band, which require high spectral resolution, and in atmospheres with strongly absorbing aerosols, such as particles including black carbon, the vertical variation in the IOPs must be properly taken into account.
The artificial separation of the atmosphere into several homogeneous layers with different IOPs, leads to errors in simulated radiances and irradiances.Although errors illustrated in this paper based on a two-layer model are quite large, they could be reduced or even eliminated by introducing a sufficient number of layers or by assuming a continuous variation of the IOPs.Therefore, radiative transfer codes such as DISORT, SOSVRT, PolRadtran, 6S, MODTRAN etc are expected to be accurate enough if the atmosphere is divided into a sufficient number of layers.
Fig. 3 .
Fig. 3. Horizontal radiances (left three panels) and relative errors (right three panels) at z = 2km computed from downward radiance in the upper layer (2km + ) and upward radiance in the lower layer (2km-) with the two layer model.The true value is derived from the improved SOSVRT using a continuous IOP profile.The aerosol optical depths for layer 1 and 2 are 0.147 and 0.253, respectively, and the Rayleigh scattering optical depths for layer 1 and 2 are 0.246 and 0.070, respectively.The IOPs of each layer is given by Eqs.(20)-(22), and the cosine of the solar zenith angle was µ0 = 0.6.
Fig. 4 .
Fig. 4. Relative differences in upward radiances at TOA and downward radiances at the surface produced by the 2-layer model for three different values of the aerosol single scattering albedo ωa.The optical properties of the two layers are the same as shown in Fig. 3.
Fig. 6 .
Fig. 6. , Maximum error of radiance and irradiances versus the number of layers used in the simulation. ) | 5,066.6 | 2010-03-15T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Utilizing User-Generated Content and GIS for Flood Susceptibility Modeling in Mountainous Areas: A Case Study of Jian City in China
: Floods are threats seriously affecting people’s lives and property globally. Risk analysis such as flood susceptibility assessment is one of the critical approaches to mitigate flood impacts. However, the inadequate field survey and lack of data might hinder the mapping of flood susceptibility. The emergence of user-generated content (UGC) in the era of big data provides new opportunities for flood risk management. This research proposed a flood susceptibility assessment model using UGC as a potential data source and conducted empirical research in Ji’an County in China to make up for the lack of ground survey data in mountainous-hilly areas. This article used python crawlers to obtain the geographic location of the floods in Ji’an City from 2016 to 2019 from social media, and the state-of-the-art MaxEnt algorithm was adopted to obtain the flood occurrence map. The map was verified by the flood data crawled from reliable official media, which achieved an average AUC of 0.857% and an overall accuracy of 93.1%. Several novel indicators were used to evaluate the importance of conditioning factors from different perspectives. Land use, slope, and distance from the river were found to contribute most to the occurrence of floods. Our findings have shown that the proposed historical UG C-based model is practical and has good flood-risk-mapping performance. The importance of the conditioning factors to the occurrence of floods can also be ranked. The reports from stakeholders are a great supplement to the insufficient field survey data and tend to be valuable resources for flood disaster preparation and mitigation in the future. Finally, the limitations and future development directions of UGC as a data source for flood risk assessment are discussed.
Introduction
Flood is a very common and quite destructive natural disaster around the world, which is usually triggered by intense but short-term precipitation events [1]. Due to factors such as terrain and climate, floods are especially prone to occurring in mountainous-hilly areas [2][3][4][5]. In China, mountainous areas account for about two-thirds of the country's land area, and nearly half of Chinese towns are located in such areas. Among the total flood-related deaths, deaths in mountain areas accounted for more than 70% [6,7]. Several southeastern provinces of China (e.g., Jiangxi, Fujian, and Guangdong) suffered the most from flooding due to the hilly terrain and high annual precipitation. In these floodprone areas, flood can damage massive houses and crops, causing substantial wealth loss and increasing the possibility of regional poverty [8]. Due to the high risk of flood disasters, flood monitoring and assessment have become necessary strategies for these towns to formulate a sustainable land-use plan and increase urban resilience against climate change [9].
Proper evaluation of flood risk in rural and mountainous areas is challenging. First of all, quantification of flooding susceptibility is a multifaceted process. Floods not only occur
Study Area
Ji'an County (Figure 1) is one of the central cities of Jiangxi Province, where the G River runs from south to north, bringing abundant rainfall. Ji'an's geographical exten between latitudes of 25°58′ to 27°57′ N and longitudes of 113°46′ to 115°56′ E. The city i typical mountainous and hilly landform, surrounded by mountains with the Jitai Basin the middle, whose elevation ranges from 100 m to 1542 m. The region is composed of administrative counties, with a total of 213 towns, covering a total area of about 20,0 km 2 , and holds a population of 5.4 million. The climate of the study area is dominated a subtropical monsoon climate. The mean annual precipitation is about 1504 mm with mean temperature of approximately 17.8 degrees according to the long-term (1988-20 data from weather stations. Over the past five years, the city experienced catastrophic floods almost every year, causing huge losses to the lives and property of the people in the village (Table 1). In the flood event of 2019, from June 6 to June 9, the torrential rains and cyclonic storms combined together to produce the most devastating flood disaster. According to the statistics, the flood killed more than 15 people, collapsed more than 1300 houses, affected Although the flood events passed, the traces caused by floods remain on the Internet. Online reports not only have words and pictures but are also full of videos and sympathy for the victims. Thus, it is urgent to evaluate the flood susceptibility for the city's future development.
Research Method
Scientific communities have developed various approaches to assess flood hazards and quantify flood susceptibility [28]. The earlier approaches included subjective expert knowledge, frequency ratio, weighting factor, Shannon's entropy, discriminant analysis, bivariate or multivariate regression, generalized linear model, logistic regression, etc. [29]. Recently, several complex and more intelligent machine learning methods, such as artificial neural networks (ANNs), support vector machines (SVMs), random forest (RF), and decision trees (DTs), are proposed for flood assessment [4,30,31]. All these statistical methods are fit for flood susceptibility assessment.
Maximum entropy (MaxEnt) model was chosen particularly here because the model is practical and requires relatively fewer training data and no need to generate non-flood points for supervised classification. The MaxEnt model is an advanced machine learning algorithm and was first used by scholars to study the distribution of animal and plant populations [32][33][34]. The model was soon adopted by scholars in other research fields, including flood sensitivity assessment research, and proved to be very accurate [35].
Here, it was used to verify the performance of UGC as source data to evaluate flood susceptibility in mountainous areas.
The procedural approach ( Figure 2) taken in the present research can be summarized as (i) collection and preparation of the required data for the flood modeling in the study area; (ii) retrieval of the flood historical events in UGC to determine geographical location of floods; (iii) identification of key factors affecting flood occurrence and susceptibility mapping; (iv) reliability assessment of the model by the division of training data, reported by ordinary users and test data, reported on official websites; (v) statistical analysis of flood susceptibility map and policy recommendations.
Data Collection
The Internet environment in China is different from that of many countries in the world. For instance, Twitter, Facebook, YouTube, and WhatsApp are not popular in the country. Compared to the Internet environment in other countries, the participation platform in China is diverse. Individuals and business enterprises use social media tools such as WeChat and Sina Blog to release news and disaster situation information on the web. Several social networking sites (SNSs) such as Zhihu (similar to Quora), Baidu Baike (similar to Wikipedia), and Tianya communities (a bulletin board service) are also very popular among users for uploading and sharing disaster information. The users' observations posted on the web are usually not well organized in a structured way. The information relevant to the specific flood event is inundated with irrelevant information. Fortunately, data mining technology and AI can help us retrieve unstructured data and effectively search the relevant UGC for flood inventory.
Data Collection
The Internet environment in China is different from that of many countries in the world. For instance, Twitter, Facebook, YouTube, and WhatsApp are not popular in the country. Compared to the Internet environment in other countries, the participation platform in China is diverse. Individuals and business enterprises use social media tools such as WeChat and Sina Blog to release news and disaster situation information on the web. Several social networking sites (SNSs) such as Zhihu (similar to Quora), Baidu Baike (similar to Wikipedia), and Tianya communities (a bulletin board service) are also very popular among users for uploading and sharing disaster information. The users' observations posted on the web are usually not well organized in a structured way. The information relevant to the specific flood event is inundated with irrelevant information. Fortunately, data mining technology and AI can help us retrieve unstructured data and effectively search the relevant UGC for flood inventory.
A web crawler combined with a natural language process (NLP) and social media application programming interface (API) was designed and implemented to help us identify the locations of flood events. The crawler searched 5689 records relating to the historical flood events in Ji'an, which were obtained from 248 websites and 125 social media accounts (BBSs, blogs, and microblogs), including authoritative media (e.g., Ji'an Evening News, Xinhua, government official website). The result of the crawler was carefully reviewed by three individuals on their types of posts, time, and geographical locations to remove repetitive and error-prone points. After filtering, 242 disaster points uploaded by users during 2016-2019 were considered very reliable, and their locations were labeled on a map ( Figure 3). Of these 242 sites, 162 were reported to occur in villages, 15 were located on the street, and 65 occurred in other places such as a certain residential area, parking lot, or a building, etc. (Table 2). A web crawler combined with a natural language process (NLP) and social media application programming interface (API) was designed and implemented to help us identify the locations of flood events. The crawler searched 5689 records relating to the historical flood events in Ji'an, which were obtained from 248 websites and 125 social media accounts (BBSs, blogs, and microblogs), including authoritative media (e.g., Ji'an Evening News, Xinhua, government official website). The result of the crawler was carefully reviewed by three individuals on their types of posts, time, and geographical locations to remove repetitive and error-prone points. After filtering, 242 disaster points uploaded by users during 2016-2019 were considered very reliable, and their locations were labeled on a map ( Figure 3). Of these 242 sites, 162 were reported to occur in villages, 15 were located on the street, and 65 occurred in other places such as a certain residential area, parking lot, or a building, etc. ( Table 2).
The crawled flood data were divided into two categories. One category was 191 points in total floods, reported on social media, such as microblogs, WeChat, Tieba, etc., and uploaded by ordinary users' accounts. There were a total of 191 flood points, and they were used to train the model. The other category was flood events published on a government official website or from authentic news media such as Ji'an Headlines, Ji'an Evening News, etc. There were 51 points in total, which were saved for model validation.
Conditioning Factors
Flood susceptibility assessment requires comprehensive consideration of various factors, including watershed features, storm characteristics, and regional characteristics. The selection of conditioning factors should take into account both the natural topography and land-use type. Based on previous studies, eight conditioning factors including elevation, slope angle, aspect, curvature, rainfall, NDVI, LULC (land use/land cover), and distance from rivers were selected to establish our flood model. The crawled flood data were divided into two categories. One category was 191 points in total floods, reported on social media, such as microblogs, WeChat, Tieba, etc., and uploaded by ordinary users' accounts. There were a total of 191 flood points, and they were used to train the model. The other category was flood events published on a government official website or from authentic news media such as Ji'an Headlines, Ji'an Evening News, etc. There were 51 points in total, which were saved for model validation.
Conditioning Factors
Flood susceptibility assessment requires comprehensive consideration of various factors, including watershed features, storm characteristics, and regional characteristics. The selection of conditioning factors should take into account both the natural topography and land-use type. Based on previous studies, eight conditioning factors including elevation, slope angle, aspect, curvature, rainfall, NDVI, LULC (land use/land cover), and distance from rivers were selected to establish our flood model. The factors were derived from different data resources such as DEM (Digital Elevation Model), weather stations and publications, and satellite imagery. The data were downloaded from Geospatial Data Cloud (http://www.gscloud.cn/ (accessed on 17 March The factors were derived from different data resources such as DEM (Digital Elevation Model), weather stations and publications, and satellite imagery. The data were downloaded from Geospatial Data Cloud (http://www.gscloud.cn/ (accessed on 17 March 2020)) or National Geomatics Center of China (NGCA), respectively. The slope angle, slope aspect, altitude, plan curvature, and profile curvature were extracted from ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) Global DEM. LULC types were prepared from remote sensing image data. Distance-from-rivers map was derived from the river distribution shapefile using the Euclidean distance tool in ArcGIS. Table 3 summarizes the factors, data sources, and factor classes used in this study. All the data layers were prepared in raster format with a spatial resolution of 30 m × 30 m.
Rainfall is a trigger factor leading to floods. The magnitude of rainfall directly affects the severity of flood disaster. The spatial distribution of rainfall was obtained from a 30+ year quasi-global rainfall dataset, called Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), which incorporates satellite imagery with in situ station data to create gridded rainfall time series. This study filtered out the maximum daily rainfall between 2016 and 2019 from the dataset in Google Earth Engine and used it for further analysis. Elevation is also an important factor contributing to the occurrence of floods. Generally, water flows downward due to the force of gravity, rainfall accumulates in the low-lying areas, and thus lower elevation is more prone to flooding. Slope is another factor frequently used in flood susceptibility assessment [36][37][38]. Slope directly affects surface Sustainability 2021, 13, 6929 7 of 18 runoff velocity and vertical infiltration, thus affecting flood susceptibility. Note that areas with smaller slopes are prone to floods. Aspect will affect precipitation, sunshine hours, and soil moisture content, which indirectly affects the occurrence probability of flood [39,40]. Curvature represents the degree and direction of a curved surface, which determines whether the water flows in a convergent or divergent manner. LULC is considered essential for identifying flood-prone areas [41]. LULC affects runoff speed, sequestration, infiltration, and evaporation transport [42]. Urban and impervious surfaces will increase rainwater runoff. NDVI describes the vegetation density of a region. It is generally believed that vegetation has an influence on both the surface runoff and infiltration capability of land [43]. NDVI was extracted from Landsat 8 OLI images by analyzing the spectral reflectance measurements obtained in the visible (red) and near-infrared regions. River is also closely related to floods. After a precipitation event, when discharge increases and overtops its bank, floods may occur in the surrounding area [44]. Linda also concluded that river networks play an important role in floods [45]. The closer a settlement is to the river, the more vulnerable it is to floods. Figure 4 shows the maps of each conditioning factor.
MaxEnt Modeling for Flood Occurrence
Maximum entropy model (i.e., MaxEnt model) is a machine learning model that looks for the most dispersed or the closest uniform distribution method to predict the probability of occurrence of things under the condition of satisfying constraint rules. The model only considers the known sample information in the calculation. By superimposing the geographic location of the flood with all the input conditioning factors, a large number of sample points is randomly generated, and then the corresponding relationship with the conditioning factors is established to generate constraint rules. The model has two main components: one is the entropy value, which is used to establish the objective function; the other is the constraint, which is used to calibrate the model.
Assuming that the probability variable X ∈{x 1 , x 2 . . . , x n } of flood occurrence in the study area, its probability distribution is p(X = x i ) = p i , i = 1, 2, . . . , n. Then the entropy of the variable X is defined as: H(X) depends on the distribution of X and has nothing to do with the specific value of X. After introducing various conditioning factors Y (Y ∈ {y 1, y 2, . . . , y k }), the entropy of the variable X under the condition known as Y is: We used the spatial analysis tools in ArcGIS software to calculate the geographic coordinates of the flood points and input them into MaxEnt model. According to the principle of maximum entropy, the objective function of the maximum entropy (MaxEnt) model is: The maximum entropy model continuously adjusts the parameter values through the random seed generation algorithm to find the optimal solution. To prevent the result from falling into a local optimum, this study used the average value of multiple training results as the final result and obtained the flood susceptibility map.
Quality Assessment and Validation
The accuracy detection of the flood susceptibility map is important for verifying the effectiveness of UGC.
Several statistical indices, such as confusion matrix, receiver operating characteristic (ROC), and area under curve (AUC), were used to assess the performance of a classifier [46]. Four parameters in confusion matrix, namely true positive (TP), true negative (TN), false positive (FP), and false negative (FN), widely used in different types of evaluation models, such as decision tree, logistic regression, and linear discriminant analysis [7,47], were calculated (see Table 4). According to the confusion matrix, the indexes for model validity can be calculated. The overall accuracy (OA) represents the proportion of samples that predict correctly (TP and TN) in all samples.
OA =
TP + TN TP + TN + FP + FN OA can directly reflect the correct proportion of points, but when the number of samples in each category is not balanced, it is necessary to use the Kappa coefficient to evaluate the accuracy of the model, which is calculated as follows.
Conditioning Factor Statistics
Before modeling, the independence of variables was tested using the Statistical Package for the Social Sciences (SPSS). When two or more variables are highly correlated, multicollinearity appears, which is an issue that should be seriously considered. In statistics, two parameters called tolerance (TOL) and the variance inflation factor (VIF, the reciprocal of the TOL) are widely used to indicate whether multicollinearity appears. In general, if the value of TOL is greater than 0.1 and VIF is less than 10, then it indicates that there is no multicollinearity among the variables. Herein, according to the criteria, the model satisfies the requirements of no multicollinearity (Table 5). After the collinearity test, a statistical analysis was carried out to intuitively understand how factors affect the occurrence of floods. Factors of maximum daily rainfall, DEM, slope, and distance from river were divided into five categories according to the natural breakpoint method. Aspect, land use, and curvature were categorized according to their natural attributes.
It can be seen from the first category in Table 6 that floods mainly occur in areas where the maximum daily rainfall exceeds 150 mm. As the precipitation increases, the number of flood points also increases. In the DEM category, 52.47% of the collected floods were found to occur at an altitude of less than 80 m, and the majority of the floods occur at elevations from 48 m to 160 m. For slope, most of the flood events occurred between slope values of 0 • and 14 • , accounting for 94.63% of the total flood occurrence. The aspect has nine directions: Flat, North, Northeast, East, Southeast, South, Southwest, West, and Northwest. However, no matter which direction, they contained a certain number of the flood occurrence points, and no obvious tendency was found. Therefore, for the occurrence of floods, slope seems to be more important than aspect in Ji'an. Curvature can be divided into concave, flat, and convex. The concave and flat areas are more prone to flooding than the convex surface. Note that 81 flood points were located on a convex surface, but according to our statistic, the convex curvatures of these points were all very small and the maximum value did not exceed 0.04. Land use was identified to have five categories: construction land, cultivated land, woodland, lawn, and water area. Data showed that the majority of flood points were located on construction land (accounted for 67.36%), while woodland was less likely to experience floods. Statistics from NDVI in the table showed that only a few points were located in areas with a high NDVI value. In the last category of the table, distance from river, we found that the farther away from the river, the fewer points the flood was reported at.
Susceptibility Map
The MaxEnt model has requirements of the conditioning factors' data format. All the raster images should be processed into ASCII images with exactly the same geographic extents, the same number of rows and columns, and the same value of pixel size. To reduce the error, we repeated the experiment three times and averaged the results to generate flood probability map, as shown in Figure 5A. As can be seen from the figure, the high-risk areas of floods are mainly distributed in the middle of Ji'an County and have a meandering shape. Overlaying Figures 4f and 5A in ArcGIS, it can be seen that the high-risk areas predicted by the model were places where construction land was highly concentrated. In addition, surrounding areas of Ji'an County were predicted to have low flood risk, where woodland is the main type of land use.
To get a hierarchical map for further data analysis, the natural breakpoint method was used to divide the level of flood risk ( Figure 5A) into four categories, as shown in Figure 5B. The predicted high-risk areas for flood in Ji'an County were concentrated in Jizhou, Ji'an, Taihe, and Yongfeng districts. Comparing with Figure 4a, it could be seen that the maximum rainfall is also heavily concentrated in these four districts.
Data statistics of Figure 5B are listed in Table 7. The high risk and high susceptibility level cover an area of 7221.9 km 2 , accounting for 29.29% of the total land area. The flood disasters were densely distributed in these areas and contained 86% of the total flood points. The data also showed that there are 10,345.8 km 2 of land located in a low-risk area, accounting for 41.9% of the total land area which means residents living in these areas do not have to worry too much about the occurrence of floods. As can be seen from the figure, the high-risk areas of floods are mainly distributed in the middle of Ji'an County and have a meandering shape. Overlaying Figure 4f and 5A in ArcGIS, it can be seen that the high-risk areas predicted by the model were places where construction land was highly concentrated. In addition, surrounding areas of Ji'an County were predicted to have low flood risk, where woodland is the main type of land use.
To get a hierarchical map for further data analysis, the natural breakpoint method was used to divide the level of flood risk ( Figure 5A) into four categories, as shown in Figure 5B. The predicted high-risk areas for flood in Ji'an County were concentrated in Jizhou, Ji'an, Taihe, and Yongfeng districts. Comparing with Figure 4a, it could be seen that the maximum rainfall is also heavily concentrated in these four districts.
Data statistics of Figure 5B are listed in Table 7. The high risk and high susceptibility level cover an area of 7221.9 km 2 , accounting for 29.29% of the total land area. The flood disasters were densely distributed in these areas and contained 86% of the total flood points. The data also showed that there are 10,345.8 km 2 of land located in a low-risk area, accounting for 41.9% of the total land area which means residents living in these areas do not have to worry too much about the occurrence of floods.
Accuracy Test
Several statistical indices were used to test the accuracy of the model. Figure 6 shows the receiver operating characteristic (ROC) curve, which was averaged over the three
Accuracy Test
Several statistical indices were used to test the accuracy of the model. Figure 6 shows the receiver operating characteristic (ROC) curve, which was averaged over the three times' model results. The average test AUC for the three times' runs is 0.857. As can be seen in the figure, the thickness of the curve in the picture denoted the fluctuation range of the ROC curve of the three times' experiments, and the standard deviation of the three times' results is 0.027, which means that even in the worst experiment of the three, the AUC of the model is 0.83. In general, an AUC of 0.5 suggests no discrimination (i.e., ability to judge areas prone or not prone to flooding based on the test), 0.7 to 0.8 is considered acceptable, 0.8 to 0.9 is considered excellent, and more than 0.9 is considered outstanding. Thus, the predictive ability of our model is excellent.
To derive the confusion matrix, 51 flood points we saved from reliable and official media and 51 non-flood points generated by random seed algorithm were used. We set the flood sensitivity threshold to 0.7 to generate a binary map of flood occurrence. AUC of the model is 0.83. In general, an AUC of 0.5 suggests no discrimination (i.e., ability to judge areas prone or not prone to flooding based on the test), 0.7 to 0.8 is considered acceptable, 0.8 to 0.9 is considered excellent, and more than 0.9 is considered outstanding. Thus, the predictive ability of our model is excellent. To derive the confusion matrix, 51 flood points we saved from reliable and official media and 51 non-flood points generated by random seed algorithm were used. We set the flood sensitivity threshold to 0.7 to generate a binary map of flood occurrence.
A manual inspection was conducted on the 102 points. Among the 51 reference pixels of flood, 46 points were correctly predicted and 5 points were predicted as non-flood. Among the 51 reference pixels of non-flood, 49 points were correctly predicted and 2 were misclassified. The specific number was listed as the confusion matrix shown in Table 8.
As can be seen from Table 8, the overall accuracy of the model is 93.14% and the Kappa coefficient is 0.916, which verified the overall reliability of the model. The performance of the user's accuracy was 95.8% and 90.7% for flood and non-flood, respectively. The producer's accuracy was 90.2% and 96.07% for flood and non-flood, respectively. The producers' accuracies and the user's accuracies were all above 90%, A manual inspection was conducted on the 102 points. Among the 51 reference pixels of flood, 46 points were correctly predicted and 5 points were predicted as non-flood. Among the 51 reference pixels of non-flood, 49 points were correctly predicted and 2 were misclassified. The specific number was listed as the confusion matrix shown in Table 8. As can be seen from Table 8, the overall accuracy of the model is 93.14% and the Kappa coefficient is 0.916, which verified the overall reliability of the model. The performance of the user's accuracy was 95.8% and 90.7% for flood and non-flood, respectively. The producer's accuracy was 90.2% and 96.07% for flood and non-flood, respectively. The producers' accuracies and the user's accuracies were all above 90%, which means the model can make good predictions for each sub-category, verifying the reliability of the model.
Analysis of Variable Contributions
In this section, quantitative analysis of indices was adopted to evaluate the contribution of each influencing factor. Percent contribution of each conditioning factor was calculated. The index represents the normalized cumulative value of the variable gain in each iteration. To determine the value of the index, the increase in regularized gain is added to the contribution of the corresponding variable in each iteration of the training algorithm or subtracted from it if the change to the absolute value of lambda is negative. After the gains of all variables were accumulated, the values were normalized to percentages. Permutation importance measures the increase in the prediction error of the model after features are permuted. To determine the value of the index, the values of a certain variable on training presence and background data were randomly permuted for each conditioning variable in turn. After permuting, the model was reevaluated on the permuted data, and the resulting drop in training AUC, which was normalized to percentages, was calculated and listed in Table 9. As can be found in the table, the top three variables that contributed most to flood risks were land use, slope, and distance from river. Their contribution rate was 54.6%, 16.7%, and 14.3%, respectively. According to our previous qualitative analysis, it could be inferred that areas covered with construction surface, located on flat terrain, and close to a river were at extremely high flood risk. As could be seen, although the contribution rate of slope factor was not as high as land use, the permutation importance showed that permuting the slope variable would bring about 38% of the model error, which made it the most indispensable variable among all variables. Rainfall factor ranked fourth in percent of contribution, which implied the impact of precipitation on flood risk was also worth noting. The DEM factor made 3.6% of the contribution to predicting flood disasters, but removing this variable would only bring about 1.9% error to the model, ranking last in permutation importance among all variables. The curvature factor did not make much contribution to predicting flood disasters (only 3.2%), but removing this variable would bring a considerable 13.7% error to the model, ranking fourth among all variables. The contribution rate of the factor NDVI ranked last in percent of contribution, but it has some permutation importance. Taking percent contribution and permutation importance together, it seemed that the aspect factor would be the least important factor of all factors, which was consistent with our analysis of flood point statistics. Figure 7 shows the results of the jackknife test of variable importance. The jackknife is a resampling technique that estimates parameters by systematically leaving out each factor from a dataset and calculating the values and then finding the average of these calculations. The blue bar represents the gain of the model with the only factor, the red bar represents the gain with all factors, and the green bar represents the gain loss without the factor.
The conditioning factor with the highest gain was again land use when used in isolation of the jackknife test, which therefore appears to have the most useful information by itself. In addition, when land use was omitted, the gain decreased the most, which made it the factor that has the most information that is not present in the other variables. It is worth noting that the DEM and slope factor independently contained more than 40% of the useful information, which was a supplement to our above analysis. the factor.
The conditioning factor with the highest gain was again land use when used in isolation of the jackknife test, which therefore appears to have the most useful information by itself. In addition, when land use was omitted, the gain decreased the most, which made it the factor that has the most information that is not present in the other variables. It is worth noting that the DEM and slope factor independently contained more than 40% of the useful information, which was a supplement to our above analysis.
Discussion
Understanding the factors that contribute to flood occurrence and mapping the susceptibility to flood disasters is fundamental in managing flood hazards. UGC, as a new scientific data collection method, has begun to draw attention in flood management. Aided by information technology, such as smartphones and web applications, stakeholders including ordinary residents have more abilities to help observe flood phenomena. Therefore, it is crucial for scientists and management to integrate stakeholders' contributions into flood hazard management, especially in those areas where the traditional monitoring network is not well covered. Based on these fundamental principles, this study presented the results of a comprehensive flood susceptibility assessment using UGC as the data source for the Ji'an areas. Using social media users' observations and reports, a flood susceptibility map was obtained through the MaxEnt model and described the probability of the flood occurrence. A statistical analysis of flood points was conducted, and the importance of eight conditioning factors was analyzed qualitatively and quantitatively.
Percent contribution showed that land use, slope, and distance from river are the top three factors that contribute most to flood occurrence. Permutation importance value indicated that slope is the most indispensable factor when making the flood susceptibility
Discussion
Understanding the factors that contribute to flood occurrence and mapping the susceptibility to flood disasters is fundamental in managing flood hazards. UGC, as a new scientific data collection method, has begun to draw attention in flood management. Aided by information technology, such as smartphones and web applications, stakeholders including ordinary residents have more abilities to help observe flood phenomena. Therefore, it is crucial for scientists and management to integrate stakeholders' contributions into flood hazard management, especially in those areas where the traditional monitoring network is not well covered. Based on these fundamental principles, this study presented the results of a comprehensive flood susceptibility assessment using UGC as the data source for the Ji'an areas. Using social media users' observations and reports, a flood susceptibility map was obtained through the MaxEnt model and described the probability of the flood occurrence. A statistical analysis of flood points was conducted, and the importance of eight conditioning factors was analyzed qualitatively and quantitatively.
Percent contribution showed that land use, slope, and distance from river are the top three factors that contribute most to flood occurrence. Permutation importance value indicated that slope is the most indispensable factor when making the flood susceptibility map. Jackknife test revealed that the land-use factor contains the most useful information to evaluate flood risk that is not present in the other variables. The flood points crawled from official and authentic media verified the accuracy of the UGC-generated map. These flood points were mostly located in areas prone to flood, and the confusion matrix showed an overall accuracy of 93.14% of the map (see Table 8). In addition, the model achieved a satisfactory result with an ROC value of 85.7%.
The mapping reminds us that the high-risk areas are mainly distributed in those communities that are close to rivers. Four administrative districts, i.e., Ji'an, Jizhou, Taihe, and Yongfeng, were identified to have the highest flood risk. The information in the map is very valuable for disaster reduction. For instance, if an area is assigned with "high values" in the susceptibility map, flood management such as drainage system improvement should be given priority in these areas. Large areas of impervious surfaces should not be planned for construction in high-flood-risk areas, such as areas with low slope terrain close to the river. In addition, an increase in the use ratio of woodland in the city can reduce the risk of flood hazards.
The case study shows that the model was able to recognize high-risk areas with few reports. Due to the adverse effects of floods on the lives of local residents, locations with frequent floods are likely to respond on social media. In a mountainous area, such as Ji'an City, where data are difficult to obtain, using data generated by users on the network as a data source can also help city managers identify flood risks. The flood events on the network provide a valuable resource for scientific research and disaster recovery and they can break through the bottleneck of data quantity to analyze the flood disaster in mountainous areas. Thus, UGC in flood management was of great value and particularly significant for planning purposes or for establishing land-use regulations.
In the age of the Internet, scientists should integrate users' contribution to flood disaster management. Although the Internet landscape in China is different from other countries, the role and contents of UGC on social media of different countries are similar and can be used in hydrological monitoring, estimating flood inundation extent, and flood event detection for effective disaster risk management. Our analysis provides useful insights for flood susceptibility assessment. The results of our model are satisfying, which provides a way for other mountainous cities to carry out the research.
Using network media data as a data source is still facing some challenges. First of all, the spatial distribution of flood events tends to be in areas with high population density, and media reports are more likely to be concentrated in regions with higher economic losses. Secondly, the number of potential contributors affects the effectiveness of the method, while the regional smartphone and Internet penetration affect the number of potential contributors. In some areas, there may be fewer people using the network to publish location information. In addition, although collecting social media data is more labor-saving than field survey of flood occurrence, crawling data still need to be verified manually and removing repeated reports, which reduces the efficiency of flood assessment. More state-of-the-art natural language processing algorithms need to be applied to the process of extracting geographic locations of floods. The efficiency and accuracy comparison between the flood susceptibility model of UGC and field survey data can be conducted in future studies.
Conclusions
Floods are the most frequent type of natural disaster that seriously affects people's lives and property globally. Food susceptibility assessment is one of the critical approaches to mitigate flood impacts. The inadequate field survey and lack of data hinder the assessment of flood sensitivity in mountainous and hilly areas. The effectiveness of using UGC reported on social media as source data in flood sensitivity assessment in mountainous areas remains unknown. This study used different types of UGC on the web (i.e., text, photo, video) across web platforms (websites, blogs) to model flood susceptibility in a mountainous-hilly area that is severely affected by floods. The application of UGC in this study was novel, and the state-of-the-art MaxEnt algorithm was adopted to draw the susceptibility map. Moreover, several indicators commonly used in the field of machine learning were used here to evaluate the importance of each conditioning factor. The results reveal that UGC is of great value for flood susceptibility assessment and proved to be an effective data source. The proposed model is practical and has high accuracy. Factors of land use, slope, and distance from river were found to contribute most to the occurrence of floods in this area. The accumulative UGC can be used as an important supplement to the insufficient field survey data. Thus, in the future, flood management should pay more attention to bringing in stakeholders' contribution and public participation. The limitations of UGC are worth noting, that is, the spatial distribution of floods will be affected by population density and smartphone penetration. More efficient algorithms for mining flood-related UGC data need to be studied in the future, and a better mechanism should be established to motivate users to participate more actively in flood disaster management. | 9,051 | 2021-06-19T00:00:00.000 | [
"Environmental Science",
"Geography",
"Computer Science"
] |
Improved whale optimization algorithm and its application in heterogeneous wireless sensor networks
Aiming at the problems of node redundancy and network cost increase in heterogeneous wireless sensor networks, this article proposes an improved whale optimization algorithm coverage optimization method. First, establish a mathematical model that balances node utilization, coverage, and energy consumption. Second, use the sine–cosine algorithm to improve the whale optimization algorithm and change the convergence factor of the original algorithm. The linear decrease is changed to the nonlinear decrease of the cosine form, which balances the global search and local search capabilities, and adds the inertial weight of the synchronous cosine form to improve the optimization accuracy and speed up the search speed. The improved whale optimization algorithm solves the heterogeneous wireless sensor network coverage optimization model and obtains the optimal coverage scheme. Simulation experiments show that the proposed method can effectively improve the network coverage effect, as well as the utilization rate of nodes, and reduce network cost consumption.
Introduction
Heterogeneous wireless sensor networks (HWSNs) are a network technology that integrates wireless communication, sensors, embedded computing, and distributed information processing. It is widely used due to its flexible deployment and low cost. 1,2 Usually, wireless sensor networks (WSNs) are composed of a large number of tiny sensor nodes, and these nodes may have some different characteristics. Even the same kind of sensor nodes may show different characteristics due to hardware failure problems. Therefore, the object of this article is the partial coverage of heterogeneous WSNs. 3 In heterogeneous WSNs, sensor nodes have different sensing and communication ranges. The purpose of deploying HWSNs in the monitoring area is to monitor abnormal conditions in the target area, such as forest fire detection. This requires the monitoring area to be covered by nodes or to meet the coverage requirements of the monitoring area. If there is a coverage hole area or the coverage requirement cannot be met, the abnormal situation may be missed. 4 Coverage requirements refer to different application environments and different requirements for the coverage area of the monitoring area. Because some applications do not require 100% coverage (full coverage), only partial coverage is required, that is, partial coverage control has always been an important issue for HWSNs, which reflects the detection and tracking status of a wireless sensor network area. Coverage control can reasonably allocate network resources, thereby optimizing network coverage performance. 5 In a heterogeneous sensor network, a single node has simple functions and limited energy supply. It is necessary to optimize the network deployment for tasks and sensor characteristics. 6 The purpose of network coverage distribution research is how to effectively distribute sensors so that all points in the area are located within the sensing range of the sensor network. Traditional methods are still innovating, such as the energy efficiency and coordination mechanism of k-fold coverage hole detection in sensor networks, and the sleep mechanism of network nodes based on learning. 7 At the same time, emerging algorithms based on biological heuristics and evolutionary algorithms have received a lot of research, and a series of results have been achieved. Such as artificial bee colony algorithm based on the Voronoi diagram, artificial fish swarm algorithm, firefly optimization algorithm, particle swarm optimization (PSO) algorithm, genetic algorithm, and other methods of heterogeneous wireless sensor coverage deployment strategies. 8 Based on this, a coverage optimization algorithm based on an improved whale optimization algorithm (WOA) for HWSNs is proposed in this article, increases the coverage of the network, reduces the network energy consumption, and prolongs the lifetime of the network.
Related work
At present, there are many methods to solve the coverage optimization problem of HWSNs, and the more classic one is to use virtual force to optimize it. Combined with the Voronoi diagram model, research and propose a new algorithm that can be applied to the optimization of sensor network coverage. This article first proposes a sensor network coverage optimization algorithm based on the outer neighboring polygon of the Voronoi diagram. After setting up the virtual sensor node set at the periphery of the area according to the agreed rules, the Voronoi diagram is used to divide the real sensor nodes in the area and the virtual sensor nodes at the periphery of the area together, and define the outer neighboring envelope polygon. Aiming at efficiently shutting off redundant sensors and enhancing coverage ratio, the authors present a virtual centripetal force-based coverage-enhancing algorithm for wireless multimedia sensor networks (WMSNs). 9 These holes are obtained using a Voronoi diagram for the case of sensors with the same sensing ranges, and a multiplicatively weighted Voronoi (MW-Voronoi) diagram for the case of sensors with different sensing ranges. 10 In order to improve the coverage effect of wireless sensor networks, a network coverage algorithm based on evidence theory is proposed in Wang and Guo. 11 In order to effectively improve the coverage of a wireless sensor network in the monitoring area, a coverage optimization algorithm for wireless sensor networks with a Virtual Force-Le´vy-embedded Grey Wolf Optimization (VFLGWO) algorithm is proposed in Karimi-Bidhendi et al. 12 The swarm intelligence algorithm proposes new ideas to solve the coverage optimization problem of HWSNs. In recent years, a large number of scholars have applied swarm intelligence algorithms to the coverage control of HWSNs and studied its performance. Alia and Al-Ajouri 13 proposed the introduction of a harmony search algorithm in wireless sensor networks to optimize nodes. Although the algorithm has strong parallel search capabilities, it converges slowly near the optimal solution, which makes it difficult to meet the real-time requirements of dynamic nodes. Du et al. 14 proposed a PSO-based wireless sensor network coverage optimization algorithm, which can effectively achieve wireless sensor network coverage optimization. The disadvantage is that the PSO algorithm is easy to fall into local extreme points, which limits the search range of particles. Feng et al. 15 proposed to combine K-means clustering with artificial fish school algorithm (AFSA) to improve the coverage of the network. This method can effectively avoid the algorithm from falling into premature and accelerate the convergence of the algorithm, but there are insufficient considerations for the random deployment of nodes, the perception blind zone, and the overlap zone. Duan et al. 16 proposed to apply the improved ant colony algorithm to network node coverage optimization. Although it increases the algorithm's strong local search ability, it does not consider actual environmental factors to a certain extent, which affects the real-time performance of network coverage optimization.
There are also some other solutions to the coverage problem of heterogeneous sensor networks, such as the research studies. [17][18][19][20] When there are still problems such as high computational complexity, poor real-time performance, and slower convergence speed of the algorithm. In response to this problem, the Sine and Cosine Algorithm (SCA) is used to improve the WOA. Change the convergence factor of the original algorithm from linear decline to nonlinear decline in the form of cosines, balance the capabilities of global search and local search, and add inertial weights in the form of synchronous cosines to improve optimization accuracy and speed up search speed. A coverage optimization algorithm for HWSNs based on Sine-Cosine Algorithm Optimization Whale Algorithm (SCA-WOA) is proposed. The improved WOA solves the HWSNs coverage optimization model to obtain the optimal coverage plan and improve the network coverage.
Mathematical model
Assuming that the monitoring area is a limited twodimensional plane, an appropriate number of sensor nodes is placed in the area to achieve the complete coverage of the area. In practical applications, the complete coverage of the monitoring area does not need to be achieved, and the deployment of a large number of nodes will impose unnecessary costs. Generally, only incomplete area coverage and a limited coverage rate are required for a specific area. Under the minimum cost, an appropriate number of nodes are deployed to achieve the coverage control of the network. Alternatively, under a certain cost mechanism, a limited number of nodes are deployed to achieve the optimal network coverage.
In this research, the probability-aware model is used to calculate the coverage rate of the network. Each sensor node in the HWSN takes itself as the sensing coverage center, and has a circular area with a fixed communication radius. Therefore, it is difficult for all the sensor nodes to solve the total coverage of the monitoring area via mathematical equations. To simplify the coverage problem in WSNs, the area to be monitored can be discretized into m 3 n pixels. Assuming that x pixels are covered by WSN, the coverage can be expressed as x/(m 3 n).
Suppose that the measurement radius r of each sensor node in a WSN is the same as the communication radius r s , and the coverage area of each sensor node is a circular area with radius r. In this work, it is assumed that the measured area of the sensor network is a twodimensional plane M, which is discretized into m 3 n pixels. There are N sensor nodes in the WSNs. The set of sensor nodes in the measured area is G = {g 1 , g 2 , ..., g N }, and the position of the ith sensor node g i is (x i , y i ). Assuming that the coordinates of the pixel H are (x H , y H ), then the distance between the pixel and the sensor node of g i is as follows Using a two-dimensional perception model, the probability of the sensor node g i sensing pixel H is Assuming that any one sensor node can be sensed by multiple sensor nodes at the same time, the joint probability that the sensor node at pixel H is sensed by the node set G of wireless sensor network is The coverage rate l of all the sensor nodes to be detected is In addition, assuming that the network nodes work efficiently u is In equation (5), the parameter S 1 is the total number of sensor nodes, and the parameter S 2 is the number of effective working sensor nodes. Taking into account the energy balance of the network, the definition of energy balance coefficient h is introduced, specifically, the parameter E i represents the remaining energy of the node i, and the parameter k represents the number of the active nodes The parameter h reflects the equilibrium degree of the network energy consumption. The larger the value, the more uneven the energy consumption. On the contrary, the more uniform the energy consumption.
Since the coverage of WSNs is optimized to integrate the number of the working nodes, the coverage rate, and the energy balance, on the basis of making the network coverage rate meet the actual application requirements, as many redundant nodes as possible go to sleep state, thereby saving the energy consumption. Therefore, the coverage optimization objective mathematical model f of WSNs is In equation (7), the parameters v 1 , v 2 , and v 3 are the weight coefficients, v 1 + v 2 + v 3 = 1.
The optimization goal of the network coverage model of HWSNs is the maximum value of the coverage function in equation (7).
WOA
The WOA simulates the hunting behavior of humpback whales. Each solution to the problem is regarded as a whale, and each whale uses a random exploration mechanism to search for prey when hunting. 21 After the prey is found, it uses contraction envelopment and spiral bubble net attack to launch an attack. In the WOA algorithm, inspired by bionics, the iconic hunting methods of whales are modeled as the processes of encirclement, predation, and random search. 22 Encirclement process. In the process of optimizing the function of the algorithm, the position of each individual represents a solution searched by the algorithm in space. When performing optimization tasks, in order to accurately find the location of the optimal solution, each individual generated in the algorithm begins to explore the area near the initial location. 23 Assuming that the individual with the smallest fitness value in the current population is the target prey, other whales update their positions according to this position. The mathematical model at this stage is as follows 24 wherein, the parameter X(t) represents the position of each individual at the tth generation, X*(t) represents the global optimal position at the tth generation, and the parameter t represents the current iteration number, where the parameters A and C are as follows Among them, the parameters r 1 and r 2 represent the random numbers that obey the distribution of [0, 1], and the parameter a represents an adjustment parameter that decreases from 2 to 0 as the number of iterations increases. 25 During the predation process, the humpback whale bubble net predation method is to narrow the encircling circle while spiraling. In the mathematical modeling, the shrinking orbiting mechanism of a school of whales is simulated by changing the parameter a. The value range of the parameter A in the algorithm is between [2a, a]. When the value of A is between [21,1], the position X(t + 1) of each individual at generation t + 1 is the position X(t) and tth generation when they are in the tth generation. Based on this idea, the goal of surrounding prey is achieved between the global optimal position X*(t) in the generation time. The mathematical model is expressed as In formula (13), D p = jX Ã (t) À X (t)j, it represents the distance between each individual and the optimal solution in the tth generation, and the parameter b represents the constant in the spiral travel equation of the swarm of whales, and the value is 1. 26 The parameter l is a random number, and the value range is [21,1]. When each individual whale swims toward the target, it adopts two strategies: shrinking the encircling circle and spiraling forward. In order to make these two methods go on at the same time, the probability of choosing two travel methods when performing optimization tasks is set in the modeling is 50%, and the mathematical model is expressed as In formula (14), the parameter p is a random number and obeys the distribution of [0, 1].
Prey search process. In the process of searching for the optimal solution, when the value of the parameter A in the algorithm is |A| ø 1, the position update of each individual is realized by relying on each other's position. This update strategy makes it possible for each individual to stay away from the location of the current optimal solution. If the algorithm falls into the local optimal solution, this method improves the probability of the algorithm jumping out of the local optimal area to a certain extent. 27 The mathematical model is as follows wherein, the parameter X rand represents the random individual position in the population at the tth generation.
Improved WOA
Compared with other intelligent algorithms, the whale algorithm has many advantages, but the basic whale algorithm has problems such as slow convergence speed and local optimal solutions when dealing with high-dimensional complex problems. Therefore, some improved whale algorithms are proposed in the field of algorithm optimization. In this article, an improved whale algorithm is applied to solve the optimal coverage problem of HWSNs. The convergence factor a in the whale algorithm shows a linear convergence trend in the iterative process, which does not conform to the actual iterative search process of whales. The algorithm needs to avoid the emergence of premature problems when dealing with high-dimensional complex problems. In order to balance the global search and local search capabilities, this article uses the cosine form of decreasing method, the specific expression is as follows Among them, the parameter t is the current iteration number, and the parameter T max is the maximum iteration number. When the inertia weight value is large, the global search ability is strong, and the inertia weight value is small, the local search ability is strong. Therefore, this article draws on the cosine change of the previous convergence factor a and applies a new nonlinear inertia weight. As the number of iterations increases, it dynamically adjusts the global search and local search capabilities while accelerating the algorithm's convergence speed and improving the accuracy of the algorithm's optimization The expression of the weight The initial t value of the number of iterations is small, the weight v is large, and the adjustment step of the algorithm is also large. The whale can search for the optimal solution in a large space. As the number of iterations t increases, the weight v becomes smaller and smaller, and the algorithm adjustment step size also becomes smaller. At this time, the whale performs a fine search in the optimal solution neighborhood space. The parameter v changes adaptively as the number of iterations of the current whale group changes, which improves the convergence accuracy of the whale algorithm and speeds up the convergence speed.
Application of SCA-WOA algorithm in optimal coverage of HWSNs
This article designs the coverage optimization objective of HWSNs based on the improved SCA-WOA algorithm. That is, the maximum value of the objective function of the coverage optimization of the HWSN is solved, based on the coverage rate optimized by the SCA-WOA algorithm, and the distribution position of all the sensor nodes in the area to be tested after the optimized deployment is obtained. 28 The coverage optimization steps of HWSNs are as follows: Step 1. Initialize the heterogeneous sensor network coverage optimization system, randomly generate N sensor initial positions and initial energy, the population size of the whale algorithm is n, and set other parameter vectors A and C, the convergence factor a, and the maximum number of iterations T max . And the vector of the whale's moving position.
Step 2. Calculate the fitness values of all whales according to the fitness function, keep the positions of the whales with high fitness values, and search for the whales with low fitness values in the direction of their prey. The objective function is the fitness function. The objective function in this article is the maximum coverage of HWSNs.
Step 3. Calculate the fitness of each whale from the objective function f(x), and save the optimal value.
Step 4. Update the location information of the whales by searching, encircling, and attacking their prey, so that the whales are approaching to prey in the direction of the maximum power point.
Step 5. Generate uniformly distributed WOA and other parameters through equation (13), update parameters a, A, and C at the same time, and update parameters through equation (14).
Step 6. Compare the size of A, compare the probability factor value with 0.5, and select the corresponding location update formula. Update the current position according to the spiral mechanism of equation (16).
Step 7. Update the saved optimal position X * according to formulas (18) and (19). Combining the sine and cosine algorithm with the WOA to screen the position of the leader, to a certain extent, avoids the defect that the algorithm is prone to premature maturity. This method retains the superiority of the WOA, while balancing the algorithm's global detection and local optimization capabilities. Decide whether to update the global optimal position. 29 Step 8. Determine whether the algorithm satisfies the stop loop condition. If it does, it will jump out of the main loop and output the target position and the optimal fitness function value. Otherwise, return to Step 3 to recalculate.
The flowchart of HWSNs coverage optimization based on SCA-WOA algorithm is shown in Figure 1.
Comparison and analysis of algorithm simulation
Based on the MATLAB 2017a simulation environment, this article solves the proposed HWSNs heterogeneous node deployment and coverage optimization problem based on the SCA-WOA. Performance comparisons are made with particle swarm PSO algorithm, AFSA, and basic WOA. The four algorithms are tested on HWSNs' coverage effect, coverage rate, node remaining energy, and algorithm simulation time-consuming indicators. The population size is set to 50, and the maximum number of iterations is set to 50.
Function objective optimization
In order to reflect the superior performance of the SCA-WOA algorithm proposed in this section, mainly, from the two aspects of the algorithm's convergence speed and optimization accuracy, we give five test functions for algorithm experiment comparison. The mathematical formula, dimension, and boundary range of the standard test function are shown in Table 1.
The four algorithms of PSO, AFSA, WOA, and SCA-WOA are used to test and compare the functions. The five test functions are all very classic and commonly used functions in the performance test of the swarm intelligence optimization algorithm. The experimental results are shown in Table 2.
It can be seen from the results of the function of the four algorithms to solve the optimal value (seeking the minimum value of the function) that the accuracy of the order of magnitude of the function optimal solution of the four algorithms is different. On the whole, the PSO algorithm has the lowest accuracy, the AFSA algorithm has a lower accuracy, and the WOA algorithm has a higher accuracy. The algorithm proposed in this article has the highest accuracy of the minimum value of the solution function. Taking the test function F5 as an example, the improved SCA-WOA algorithm is two orders of magnitude higher than the algorithm calculated by the basic WOA algorithm. From the overall test results, the optimization effect of the PSO optimization algorithm is the worst. It can be seen that the performance of the improved SCA-WOA algorithm has obvious competitive advantages.
Coverage effect comparison
First, in the monitoring environment, assume that the area of the environmental area that needs to be monitored is 100 3 100 m 2 , and 50 sensor nodes are randomly deployed here. Including heterogeneous nodes with different perception radius, the perception radius range is [5,20], which are some random numbers. The initial energy of the node is 1 J, and the energy of the heterogeneous node is 3 J. The simulation software uses MATLAB R2017a.
Comparison of coverage effects. In order to compare the coverage performance of the four coverage algorithms of PSO, AFSA, WOA, and the algorithm proposed in this article, the average results of 50 experiments were performed under the same simulation conditions. First, randomly generate the position of the sensor node in the monitoring area. In the figure, a square frame represents the covered monitoring area, '''' represents the location of the sensor node, and the circle represents the coverage area of different sizes of heterogeneous sensor nodes. When the sensor nodes of HWSNs are randomly deployed, the four different algorithms are from 50, 100, 500, and 1000 iterations, the number of iterations increases sequentially, and the network coverage rate increases significantly. When the number of iterations increases, the network coverage rate has been greatly improved. Taken together, the proposed SCA-WOA algorithm has improved coverage compared with PSO, AFSA, and WOA algorithms. From the perspective of the overlay effect of the heterogeneous sensor nodes, the nodes of the PSO algorithm are very obvious in the 50 iterations, and a large area is not covered, and the effect is very poor. At 1000 iterations, there are still some overlays and blank areas, and the coverage effect of the PSO algorithm is not ideal. The AFSA algorithm has a small amount of overlay and a small amount of blank areas when deployed at 50 iterations. At 1000 iterations, the final coverage effect is relatively good. However, there are many overlapping areas of coverage between heterogeneous nodes, causing some waste of coverage nodes. The basic WOA algorithm also has large coverage overlap and large coverage gaps in heterogeneous nodes in 50 iterations. After 1000 iterations, the final coverage effect still has some areas that are not covered, and the coverage overlap between nodes still exists. But overall, the coverage effect is better than the performance of the previous two algorithms. The SCA-WOA algorithm proposed in this article has obvious node aggregation at 50 iterations, but the network coverage blank area is the smallest. At 1000 iterations, the final coverage effect is the best, and the coverage redundancy area between heterogeneous nodes is the smallest. The coverage of white space in the network is also the smallest. On the whole, the algorithm proposed > : [250,50] 0 [25.12,5.12] 0 in this article has the best coverage and the fastest algorithm convergence speed.
Comparison of coverage rate. Network coverage is a key indicator considered by HWSNs. As the number of iterations increases, the HWSNs' coverage comparison between the basic PSO, AFSA, WOA, and the proposed SCA-WOA algorithm is shown in Figure 6. The network coverage of the four algorithms is gradually increasing with the increase in the number of iterations. Especially when the first 1-100 iterations are calculated, the coverage of the four algorithms increases the most. From the 100-400 iterations, the coverage rate of the four algorithms is gradually increasing, slowly rising. When the iteration reaches 500 times, the PSO algorithm basically reaches the maximum value and does not increase, the basic AFSA algorithm is still slowly improving, the WOA algorithm is basically unchanged, and the algorithm mentioned in this section has the largest increase. When iterated to 1000 times, the coverage rate of the SCA-WOA algorithm proposed in this article reached 96.8%, the WOA algorithm reached 94%, the AFSA algorithm reached 93%, and the PSO algorithm only reached 91%. On the whole, the proposed algorithm has the highest coverage and the best performance.
Comparison of network coverage under different number of nodes. In order to increase the superior performance of the proposed SCA-WOA algorithm, we have increased It can be seen from Figure 7 that when the number of heterogeneous sensor nodes is increased, the coverage of each algorithm is significantly improved. On the whole, the SCA-WOA algorithm proposed in this article has the highest coverage. The basic WOA algorithm has a higher network coverage rate, the AFSA algorithm has a lower network coverage rate, and the basic PSO algorithm has the lowest network coverage rate. Comparing the growth rates of the four algorithms respectively, the algorithm proposed in this article has the fastest growth rate, the WOA algorithm has a faster growth rate, the AFSA algorithm has a slower growth rate, and the PSO algorithm has the slowest growth rate.
Comparison of network connectivity. The network connectivity of HWSNs is generally measured by the network connectivity rate, and the calculation of the connectivity rate is more complicated. We all know that the data transmission of heterogeneous sensor nodes is a multihop self-organization method. Our connection rate is calculated by the number of hops of data transmission, the number of hops from the source node to the destination node is counted, and the connection rate is calculated by the data of heterogeneous nodes. It is obtained by transmission traversal method. Taking the data transmission from the heterogeneous sensor node of the source node to the destination node as an example, the data from the source node are transmitted to the destination node in order to find its one-hop, twohop, and three-hop neighbor heterogeneous nodes, and the multi-hop transmission continues. Until the number of connections to the original source node does not increase. The comparison of the calculation results of the HWSNs connection rate of the four algorithms is shown in Figure 8.
The comparison of the connectivity performance of the four HWSNs fusion algorithms in Figure 8 shows that the network connectivity of the four algorithms gradually decreases with the increase in the number of simulation polls. This is mainly due to the remaining energy of the network as the simulation progresses. Gradually decline, resulting in a gradual decline in network connectivity. From the comparison of the network connectivity performance of the four algorithms, the PSO algorithm has the worst network connectivity, with an average connectivity rate of only about 0.3. The connectivity rate of the AFSA algorithm is about 0.5, the connectivity rate of the basic WOA algorithm is about 0.65, and the average coverage rate of the SCA-WOA algorithm proposed in this article is about 0.78. On the whole, the SCA-WOA coverage optimization algorithm proposed in this article has the highest network connectivity rate and is relatively stable. In the 40th round, the connectivity rate value is still 0.79, and the network connectivity performance is better.
Conclusion
This article analyzes the basic principles and shortcomings of the WOA algorithm, and proposes an improved SCA-WOA algorithm on this basis. The algorithm introduces the sine and cosine algorithm to avoid falling into the local optimum and strengthen the global search ability. At the same time, combined with the fitness value, an adaptive location adjustment strategy is proposed to speed up the convergence speed of the algorithm, and the SCA-WOA algorithm is applied to the node deployment problem of HWSNs. Experimental results show that the SCA-WOA algorithm effectively avoids falling into local optimality and speeds up the convergence speed. Compared with the basic WOA algorithm, the SCA-WOA algorithm improves the coverage rate of HWSNs by 8% after optimization, and its application adaptability strong.
Therefore, the SCA-WOA algorithm designed in this article improves the coverage performance of the HWSNs network to a certain extent, but in the application process, some regional nodes are too clustered. Future research directions should make HWSNs cover more evenly and reduce the area where nodes gather.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 6,882.4 | 2021-05-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
sensing
: Atmospheric profiles are important input parameters for atmospheric radiative transfer models and atmospheric parameter inversions. The construction of regionally representative reference atmospheric profiles can provide basic data for global atmospheric and environmental research. Most reference atmospheric profile databases commonly used lag behind in updating frequency. These databases usually have limited spatial and temporal resolution and differ greatly from the real atmospheric state. To present the real atmospheric state, this article constructs the Global Reference Atmospheric Profile Database (GRAP) based on ACE-FTS satellite products of 2021 and 2022, AIRS satellite products and ERA5 reanalysis data of 2022 u6sing a random forest regression model and a hierarchical mean algorithm. The radiance spectrum of FY-3E HIRAS-II using different profile databases was simulated and compared with the measured spectrum. The results show that GRAP spectral simulations fit better with the measured HIRAS-II spectrum. Comparing the CO 2 , CH 4 , O 3 and N 2 O profiles of GRAP, AFGL, MIPAS, RTTOV and NDACC ground station profiles in equatorial, mid-latitude summer and polar winter, the results show that GRAP has high spatial and temporal resolution and better fits the current real atmospheric state. Comparing the temperature profiles of eight regions in China, the results illustrate that GRAP is a better representation of the state of the atmosphere in the Chinese region. GRAP can provide fundamental atmospheric data for radiative transfer studies and atmospheric parameter inversions.
Introduction
Atmospheric profiles describe the state of the atmosphere and fundamentally determine its optical properties. Atmospheric profile sample datasets and reference atmospheric profile databases are widely used in research on atmospheric radiative transfer models, atmospheric parameter inversions, simulations of the spectral properties of new satellite instruments and satellite data assimilation [1][2][3][4][5]. As the global atmospheric environment changes, the data on atmospheric profiles used for model development and instrument accuracy verification needs to be continuously updated. Therefore, the construction of a regionally representative global reference atmospheric profile database is of significant importance for the atmospheric environment and global change research.
Atmospheric profile sample datasets can be used to estimate the statistical properties of the background fields. Currently, there are several versions of atmospheric profile sample datasets commonly used internationally, such as TIGR (Thermodynamic Initial Guess Retrieval) [6], ECMWF (European Centre for Medium-Range Weather Forecast) 31L-SD, 50L-SD [6,7], 60L-SD [8][9][10], NESS-35 [11,12], NOAA88 [13,14], etc. Each sample dataset contains different atmospheric profile parameters, data sources and sample sizes due to their different intended applications. The TIGR is an atmospheric profile sample dataset created by the French Laboratory for Dynamical Meteorology, and there are currently five versions available. These profiles were selected from a large number of atmospheric samples from different periods around the world using topological methods. The ECMWF used the same methods as the TIGR and created the 31L-SD, 50L-SD, 60L-SD and 91-L short-range forecast atmospheric profile sample datasets. The ECMWF creates the NOAA88 atmospheric profile sample dataset of 7547 sounding profiles and the ECMWF-52 sample dataset with 52 atmospheric temperatures, humidity and ozone profiles in two atmospheric height level formats, 60 and 101 layers. The existing studies have analyzed these atmospheric sample datasets and found that only the TIGR-43 sample dataset contains one atmospheric profile located on Dachen Island, Zhejiang Province, China [15]. The other atmospheric sample datasets generally lack atmospheric samples that are representative of the Chinese region. To address this issue, Qi Chengli used the topological sampling method to establish the CRASD-1 and CRASD-2 sample datasets with characteristics specific to the Chinese region [15,16]. Due to the large latitude span, complex topography and diverse climate of China, using a single atmospheric profile to represent the whole Chinese region is unreasonable and may cause significant errors in research and analysis. Therefore, it is crucial to improve the Chinese regional atmospheric profiles.
The reference atmospheric profile databases are primarily used for the application performance evaluation and accuracy verification of satellite detectors, radiative transfer models and atmospheric inversion method models. The databases should include meteorological parameters such as air pressure, temperature, gas composition and profile distribution. For internationally popular atmospheric radiative transfer software, LOW-TRAN [17], MODTRAN [18], LBLRTM [19], FASCODE [20] and RFM [21] are used in the six reference atmospheric profiles created by the US Air Force Geophysical Laboratory (AFGL), which are the tropical (15 • N) atmosphere, the mid-latitude summer (45 • N, July) atmosphere, the mid-latitude winter (45 • N, January) atmosphere, the sub-polar summer (60 • N, July) atmosphere, the sub-polar winter (60 • N, January) atmosphere, and the 1976 US Standard Atmosphere [22,23]. The six reference atmospheric profiles take into account the changes of atmospheric parameters with latitude and season, but their spatial-temporal distribution only represents the summer and winter of latitude zones without considering the influence of longitude on atmospheric parameters or the seasonal changes in spring and autumn. Furthermore, the reference atmospheric profiles are updated less frequently. With the intensifying global climate change, atmospheric parameters such as global temperature, CO 2 , CH 4 and O 3 have undergone significant changes compared to previous ones, and the delay in updates could cause great errors in the application of studies applying these reference atmospheric profiles.
To address the issues of the existing reference atmospheric profile databases, such as long update periods, large spatial resolution and inadequate consideration of seasonal changes in spring and autumn, this article uses the ACE-FTS Level 2 Version 4.1 products in 2021 and 2022, the AIRS Support Level 2 Version 7 products, and ERA5 reanalysis data in 2022 to create the Global Reference Atmosphere Profile Database (GRAP) through the use of a random forest regression model and a stratified mean algorithm. The objective is to provide data support for research on global climate change and atmospheric component inversion.
Data Sources
The data sources used in this article include ACE-FTS Level 2 Version 4.1 satellite products in 2021 and 2022, AIRS Support Level 2 Version 7 satellite products, and ERA5 reanalysis data in 2022. (1) ACE-FTS L2 products The ACE-FTS instrument was launched on 12 August 2003 on board the SciSat-1 satellite. It has a spectral resolution of 0.02 cm −1 , a vertical resolution of 1-2 km, a horizontal resolution of 500 km, a wavelength range of 750-4400 cm −1 (2.2-13.3 µm), and a high vertical resolution using occultation for atmospheric sounding [24]. ACE-FTS Level 2 Version 4.1 is a global dataset that includes pressure, temperature, and more than 40 atmospheric constituents such as CO 2 , CH 4 , H 2 O, O 3 and N 2 O for the period 2004 to 2023.
(2) AIRS L2 products The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua Spacecraft was launched on 4 May 2002. It orbits from one pole of the Earth to the other about fifteen times a day, covering the same region of the Earth twice a day. AIRS detects wavelengths in the range 650-2700 cm −1 (3.7-15.4 µm) and has a total of 2378 spectral channels [25]. AIRS can detect vertical profiles of atmospheric temperature and humidity, as well as the trace greenhouse gases CO 2 , CH 4 , O 3 and CO.
(3) ERA5 reanalysis data ERA5 reanalysis data is the fifth generation of atmospheric reanalysis products produced by ECMWF, providing hourly data and monthly averages for many atmospheric, land surface and sea state parameters. ERA5 reanalysis data covers the time period from 1940 to the present, with daily ERA5 data updates currently 5 days behind real time. The data is stored in a globally gridded data format, GRIB and NetCDF, with a spatial resolution of 0.25 • × 0.25 • , vertical coverage from 1000 hPa to 1 hPa and a vertical resolution of 37 pressure layers [26].
Methods
In this article, considering the influence of time and space on the atmospheric state, GRAP is divided into January to December according to the month, and the globe is divided into 38 latitude zones and 14 longitude zones, each spanning 5 • in latitude and 30 • in longitude (2.5 • for 0n, 0s, 90n, 90s and 15 • for 0e, 0w, 180e, 180w). This division results in a total of 532 grids. GRAP includes two atmospheric state parameters as well as 59 atmospheric component parameters, which are detailed in Table 1. The flow chart of the methods of creating GRAP is shown in Figure 1.
Database Metadata
Information on the name, data format, spatial and temporal resolution, height stratification format and atmospheric parameters of the global reference atmospheric profile database is in Table 1.
Atmospheric Profile Samples Acquisition
The construction of GRAP necessitates meeting the demands for extensive spatial and temporal coverage as well as comprehensive atmospheric composition. Simultaneously, it is crucial to ensure that the reference atmospheric profiles accurately represent the local atmospheric conditions. The primary challenge lies in extracting realistic profile samples from a vast collection of atmospheric profiles.
Temperature, pressure and volume mixing ratio (vmr) profiles of different atmospheric constituents are derived from the ACE-FTS, AIRS and ERA5 datasets. These profiles are then aligned with a standardized global grid, considering the detection time and latitude/longitude information, resulting in a dataset of 61 atmospheric profiles covering the entire globe. The extracted atmospheric profiles encompass data from all seasons throughout the year for 365 days, providing comprehensive spatial coverage. Figure 2 illustrates the distribution of CH 4 profile samples across the global grid, spanning from January to December.
Data Quality Control
To ensure the reliability of the satellite soundings, rigorous quality control measures are implemented to identify and eliminate erroneous profiles caused by detection errors, instrument malfunctions, cloud effects, or other exceptional circumstances. These quality control procedures aim to refine the original profile samples by excluding any incomplete or erroneous values.
The AIRS data comprises the volume mixing ratio (vmr) and the quality control mark (QC) of the atmospheric constituents. QC values are assigned as follows: 0 indicates the highest quality, 1 indicates good quality, and 2 indicates unusable. AIRS data include the vmr and the quality control mark (QC) of the atmospheric constituents. Profiles with a QC mark of 2 were excluded from the AIRS data.
The ACE-FTS data consists of volume mixing ratio (vmr) and vmr errors for different gaseous atmospheric constituents. Compute the error ratio for the volume mixing ratio and eliminate profiles with an error ratio exceeding 15%.
Since temperature, pressure and atmospheric composition vary with height with a certain regularity and the difference between adjacent heights is within a certain range [27], the vertical consistency is used to test the quality of the profiles and use Equation (1) to calculate the rate of vertical change of the atmosphere for each height layer. (1) where is the rate of vertical change of the profile in the height layer of h, is the profile value in the ℎ layer and is the corresponding height layer. The standard deviation of the vertical rate of change of the profiles, , as shown in Equation (2), was calculated. Moreover, the error profiles exceeding three σ were removed.
∑
(2) where is the rate of vertical change of the profile in layer and is the average of across height layers of the profile.
Data Quality Control
To ensure the reliability of the satellite soundings, rigorous quality control measures are implemented to identify and eliminate erroneous profiles caused by detection errors, instrument malfunctions, cloud effects, or other exceptional circumstances. These quality control procedures aim to refine the original profile samples by excluding any incomplete or erroneous values.
The AIRS data comprises the volume mixing ratio (vmr) and the quality control mark (QC) of the atmospheric constituents. QC values are assigned as follows: 0 indicates the highest quality, 1 indicates good quality, and 2 indicates unusable. AIRS data include the vmr and the quality control mark (QC) of the atmospheric constituents. Profiles with a QC mark of 2 were excluded from the AIRS data.
The ACE-FTS data consists of volume mixing ratio (vmr) and vmr errors for different gaseous atmospheric constituents. Compute the error ratio for the volume mixing ratio and eliminate profiles with an error ratio exceeding 15%.
Since temperature, pressure and atmospheric composition vary with height with a certain regularity and the difference between adjacent heights is within a certain range [27], the vertical consistency is used to test the quality of the profiles and use Equation (1) to calculate the rate of vertical change of the atmosphere for each height layer.
where dx h is the rate of vertical change of the profile in the height layer of h, x h is the profile value in the h layer and H h is the corresponding height layer. The standard deviation of the vertical rate of change of the profiles, σ, as shown in Equation (2), was calculated. Moreover, the error profiles exceeding three σ were removed.
where x i is the rate of vertical change of the profile in layer i and µ is the average of x i across height layers of the profile. Figure 1, it can be seen that there are missing atmospheric profiles in some of the grids, resulting in incomplete global data. In this article, the profiles in the missing grids are interpolated using interpolation in the space-time domain to fill the data. Interpolation is performed on the time domain based on the time series, using the data closest in time to the missing data to fill in. In the spatial domain, the inverse distance weight interpolation (IDW) is used as in Equation (3) [28]. using observations around the location of the interpolation point to fill in the missing positions.Ẑ whereẐ 0 is the estimated value at the point (x 0 ,y 0 ), Q i is the estimated weight coefficient of the interpolated point corresponding to the observed point, and n denotes the number of interpolated points. Q i is shown in Equation (4) as follows: where n is the number of known observation points and f d ej denotes the weight function of the known distance d ej between the known observation points and the interpolated points. Equation (5) for f (d ej ) is as follows:
Standardization of Atmospheric Profiles
The pressure levels of the different data sources depend on the effective sounding altitude of the instrument. The ACE-FTS data provide an altitude range of 0.5 to 149.5 km, corresponding to a pressure range of 1013 to 3.22 × 10 −6 hpa, divided into 150 pressure levels. The AIRS data provide a pressure range of 1100 to 1.61 × 10 −6 hpa, divided into 100 pressure levels. The ERA5 reanalysis data provides a pressure range of 1000 to 1 hPa, divided into 37 pressure levels. In this article, all profile samples are interpolated onto a uniform elevation grid. The profiles have an elevation range of 0-119 km and a vertical interval of 1 km over the entire height range ( Table 2 gives the three data sources and the GRAP height range level) [29]. A non-linear relationship between height and sample contour values is constructed in each grid, and each profile is interpolated to a standard height grid using a spline function interpolation method. The data sources for CH 4 , CO 2 , O 3 and temperature profile samples are mainly from AIRS satellite data and ERA5 reanalysis data, which are large in number. The four atmospheric profile samples in each grid were fitted with a random forest regression model to obtain a standard profile representing that grid. Random Forest (RF) is an algorithm that uses multiple trees to train and predict a sample [30]. There are two advantages to using a random forest regression model: (1) The random forest model has a random nature in sample extraction and feature selection, and the algorithm is not prone to over-fitting; (2) When creating the random forest, an unbiased estimate of the Generalization Error is used, and the model has a strong generalization capability; The random forest regression model construction process is as follows. Figure 3 shows the Random Forest Model Construction Flowchart.
lected from the original dataset, and the samples that are not drawn (Out of Bag OBB) form the test set; (2) Construct n decision trees, select m features from the training sample data, choos the best feature to split, and keep splitting each tree until all training samples at tha node belong to the same class; (3) Repeat both steps (1) and (2), and finally form the generated multiple classificatio trees into a random forest regression model; (4) Integrate all the generated decision trees for prediction to obtain the final prediction results. Other atmospheric component profile sample data sources are mainly ACE-FTS. Th amount of data is small for all the profile samples in the same grid to adopt the stratified mean method, according to each height layer, to find the mean value of the profile sample to obtain a standard profile to represent the atmospheric parameters in this grid. Equatio (6) for the stratified mean method is as follows:
GRAP-Based Simulation Validation
This study employs the RFM atmospheric radiative transfer model to simulate th location and absorption intensity of absorption spectral lines for CO2 and CH4 in both th full band and sensitive band ranges. The simulations select the Fengyun-3E HIRAS-I sample situated at mid-to low-latitudes (19.12°N, 98.65°E) in October. The parameters o the RFM were configured to match the surface temperature, surface reflectance, observa tion geometry, and spectral resolution (0.625 cm −1 ) of the HIRAS-II transit moment sam (1) Using the Bootstrap sampling method with put-back, n samples are randomly selected from the original dataset, and the samples that are not drawn (Out of Bag, OBB) form the test set; (2) Construct n decision trees, select m features from the training sample data, choose the best feature to split, and keep splitting each tree until all training samples at that node belong to the same class; (3) Repeat both steps (1) and (2), and finally form the generated multiple classification trees into a random forest regression model; (4) Integrate all the generated decision trees for prediction to obtain the final prediction results.
Other atmospheric component profile sample data sources are mainly from ACE-FTS products. The amount of data is small for all the profile samples in the same grid to adopt the stratified mean method, according to each height layer, to find the mean value of the profile samples to obtain a standard profile to represent the atmospheric parameters in this grid. Equation (6) for the stratified mean method is as follows:
GRAP-Based Simulation Validation
This study employs the RFM atmospheric radiative transfer model to simulate the location and absorption intensity of absorption spectral lines for CO 2 and CH 4 in both the full band and sensitive band ranges. The simulations select the Fengyun-3E HIRAS-II sample situated at mid-to low-latitudes (19.12 • N, 98.65 • E) in October. The parameters of the RFM were configured to match the surface temperature, surface reflectance, observation geometry, and spectral resolution (0.625 cm −1 ) of the HIRAS-II transit moment sampling image element. The spectrum data from HIRAS-II was captured on 10 October 2022, at 11:40 AM, with a spatial resolution of 14 km. The file name associated with the data is FY3E_HIRAS_GRAN_L1_20221010_1140_014KM_V0.HDF. Figure 4 presents the comparison between the spectrum data obtained from HIRAS-II and the simulated spectrum of the RFM in the three atmospheric models across the full band range of 650-2550 cm −1 . The results in Figure 4 indicate that the simulated spectrum of GRAP exhibits a much closer agreement with the measured spectrum from HIRAS-II. Figure 4 presents the comparison between the spectrum data obtained from HIRA II and the simulated spectrum of the RFM in the three atmospheric models across the f band range of 650-2550 cm −1 . The results in Figure 4 indicate that the simulated spectru of GRAP exhibits a much closer agreement with the measured spectrum from HIRAS- The absorption characteristics of different gases vary across different spectral ban Figure 5 illustrates the spectrum within the range of 650-760 cm −1 , which corresponds the strong absorption band of CO2. This band is influenced by interfering gases such H2O, N2O, O3 and HNO3. Similarly, Figure 6 presents the spectrum within the range 1200-1400 cm −1 , which represents the strong absorption band of CH4. This band is affec by interfering gases such as H2O, N2O, CO2, CF4 and O3. By comparing the four spectru curves in Figures 7 and 8, it is evident that the measured HIRAS spectrum (represen by the red solid line) closely aligns with the simulated GRAP spectrum (represented the green dashed line). The deviations between the three simulated spectrum curves a the HIRAS-II measured spectrum curves are calculated. Figure 7 presents the GRAP si ulated CO2 absorption band spectrum within −10% to 12%, and Figure 8 presents GRAP simulated CH4 absorption band spectrum within −30% to 25%. The simulated sp trum of GRAP exhibits smaller deviations compared to the simulated spectrum of AF and MIPAS. This finding indicates that the atmospheric profile values employed in RFM model within GRAP exhibit better consistency with the true values of the curr atmospheric state. The absorption characteristics of different gases vary across different spectral bands. Figure 5 illustrates the spectrum within the range of 650-760 cm −1 , which corresponds to the strong absorption band of CO 2 . This band is influenced by interfering gases such as H 2 O, N 2 O, O 3 and HNO 3 . Similarly, Figure 6 presents the spectrum within the range of 1200-1400 cm −1 , which represents the strong absorption band of CH 4 . This band is affected by interfering gases such as H 2 O, N 2 O, CO 2 , CF 4 and O 3 . By comparing the four spectrum curves in Figures 7 and 8, it is evident that the measured HIRAS spectrum (represented by the red solid line) closely aligns with the simulated GRAP spectrum (represented by the green dashed line). The deviations between the three simulated spectrum curves and the HIRAS-II measured spectrum curves are calculated. Figure 7 presents the GRAP simulated CO 2 absorption band spectrum within −10% to 12%, and Figure 8 presents the GRAP simulated CH 4 absorption band spectrum within −30% to 25%. The simulated spectrum of GRAP exhibits smaller deviations compared to the simulated spectrum of AFGL and MIPAS. This finding indicates that the atmospheric profile values employed in the RFM model within GRAP exhibit better consistency with the true values of the current atmospheric state.
Comparison of Reference Profiles and Discussion
This article compares and analyzes the profiles of CO2, CH4, O3 and N2O in differe latitude zones in the GRAP database, the AFGL database [22], the MIPAS database [3 the RTTOV database [32], and the profiles of CH4, O3 and N2O of the NDACC grou station in different latitude zones. It further selects and analyzes the temperature referen profiles of eight different grids in the Chinese region.
Comparison of Equatorial Reference Profiles
This study compares various atmospheric profiles in the equatorial climate zone, i cluding the reference profiles of GRAP for the 0n0e grid in July, the equatorial referen
Comparison of Reference Profiles and Discussion
This article compares and analyzes the profiles of CO2, CH4, O3 and N2O in differe latitude zones in the GRAP database, the AFGL database [22], the MIPAS database [31 the RTTOV database [32], and the profiles of CH4, O3 and N2O of the NDACC groun station in different latitude zones. It further selects and analyzes the temperature referen profiles of eight different grids in the Chinese region.
Comparison of Equatorial Reference Profiles
This study compares various atmospheric profiles in the equatorial climate zone, i cluding the reference profiles of GRAP for the 0n0e grid in July, the equatorial referen profiles of the AFGL, the MIPAS equatorial reference profiles, the RTTOV reference pr
Comparison of Reference Profiles and Discussion
This article compares and analyzes the profiles of CO 2 , CH 4 , O 3 and N 2 O in different latitude zones in the GRAP database, the AFGL database [22], the MIPAS database [31], the RTTOV database [32], and the profiles of CH 4 , O 3 and N 2 O of the NDACC ground station in different latitude zones. It further selects and analyzes the temperature reference profiles of eight different grids in the Chinese region.
Comparison of Equatorial Reference Profiles
This study compares various atmospheric profiles in the equatorial climate zone, including the reference profiles of GRAP for the 0n0e grid in July, the equatorial reference profiles of the AFGL, the MIPAS equatorial reference profiles, the RTTOV reference profiles, and the profiles measured in July 2021 at the Izaña ground station in Tenerife, Spain, which is located at the equator and affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC).
The CO 2 profiles in Figure 9a exhibit overall consistency among the four databases. However, below 80 km altitude, the CO 2 profile values of AFGL are 330 ppmv, whereas the CO 2 profile values of MIPAS, RTTOV and GRAP are approximately 370 ppmv, 400 ppmv and 418 ppmv, respectively. There is a significant degree of difference between the four reference profiles. Between 80 and 120 km, the three reference CO 2 profiles of GRAP, MIPAS and AFGL experience a rapid decrease, whereas the CO 2 profile of GRAP remains higher than the other two profiles. Figure 9b presents the comparison of the CH 4 profiles, showing that the shape of the GRAP CH 4 profile resembles the other three profiles. However, its values are consistently higher from 0 to 120 km, peaking at approximately 1.95 ppmv in the troposphere. The primary focus of CH 4 is in the troposphere, and the comparison between the four profiles and the observed profiles from the NDACC ground station reveals the smallest difference between the GRAP profile and the NDACC observations in the troposphere. Figure 9c indicates minimal differences in O 3 profile values between GRAP and MIPAS, AFGL, NDACC, with the peak O 3 concentrations occurring at 28-32 km. In contrast, the differences between the RTTOV reference atmospheric profiles and the other three reference profiles are more pronounced. The CO2 profiles in Figure 9a exhibit overall consistency among the four databases. However, below 80 km altitude, the CO2 profile values of AFGL are 330 ppmv, whereas the CO2 profile values of MIPAS, RTTOV and GRAP are approximately 370 ppmv, 400 ppmv and 418 ppmv, respectively. There is a significant degree of difference between the four reference profiles. Between 80 and 120 km, the three reference CO2 profiles of GRAP, MIPAS and AFGL experience a rapid decrease, whereas the CO2 profile of GRAP remains higher than the other two profiles. Figure 9b presents the comparison of the CH4 profiles, showing that the shape of the GRAP CH4 profile resembles the other three profiles. However, its values are consistently higher from 0 to 120 km, peaking at approximately 1.95 ppmv in the troposphere. The primary focus of CH4 is in the troposphere, and the comparison between the four profiles and the observed profiles from the NDACC ground station reveals the smallest difference between the GRAP profile and the NDACC observations in the troposphere. Figure 9c indicates minimal differences in O3 profile values between GRAP and MIPAS, AFGL, NDACC, with the peak O3 concentrations occurring at 28-32 km. In contrast, the differences between the RTTOV reference atmospheric profiles and the other three reference profiles are more pronounced. Figure 9d illustrates that the N2O profile values of GRAP, RTTOV and NDACC are slightly larger than those of AFGL and MIPAS within the 0-15 km range, while MIPAS values are larger within the 15-45 km range, with N2O concentration reaching approximately zero above 50 km. The four atmospheric profiles were utilized to calculate their corresponding total column density, and the resulting values are presented in Table 3. According to the WMO Greenhouse Gas Bulletin (No. 18, 2022) [33], published by the World Meteorological Organization (WMO), annual average global atmospheric concentrations of major greenhouse The four atmospheric profiles were utilized to calculate their corresponding total column density, and the resulting values are presented in Table 3. According to the WMO Greenhouse Gas Bulletin (No. 18, 2022) [33], published by the World Meteorological Organization (WMO), annual average global atmospheric concentrations of major greenhouse gases reached new highs in 2021. These are 415.7 ± 0.2 ppmv for CO 2 , 1.908 ± 0.002 ppmv for CH 4 and 0.3345 ± 0.0001 ppmv for N 2 O, which are 149%, 262% and 124% of pre-industrial (pre-1750) levels, respectively. The total column density measurements of atmospheric constituents at the equatorial Ascension Island ground station from the World Data Centre for Greenhouse Gases (WDCGG) were selected for validation, yielding values of 415.53 ppmv for CO 2 , 1.868 ppmv for CH 4 , and 0.334 ppmv for N 2 O. In summary, the atmospheric parameter profile values of GRAP demonstrate closer agreement with the current atmospheric state compared to AFGL, RTTOV and MIPAS.
Comparison of Reference Northern Hemisphere Mid-Latitude Summer Profiles
This study compares atmospheric profiles for summer in the mid-latitude climatic zone. The selected profiles include reference profiles of GRAP for July located in the US region (40n90w) and China region (40n120e), mid-latitude summer reference profiles of the AFGL, mid-latitude daytime reference profiles of the MIPAS, the RTTOV reference profiles, and NDACC measured profiles for July 2021 at the Boulder ground station site in Boulder, CO, United States. Figure 10a illustrates the comparison of CO 2 profiles, revealing the oscillation of CO 2 profile values of GRAP in the 18-75 km altitude range in both the China and US regions, with a maximum difference of 15 ppmv. However, above 75 km, the CO 2 values are lower in the US region compared to the China region. Figure 10b presents the comparison of CH 4 profiles. Between 0 and 20 km, there is a slight difference between the CH 4 profiles of GRAP in the US and China regions. The profile values of GRAP demonstrate closer agreement with the measured values at the NDACC ground station than the other three reference profiles. The CH 4 profile values of RTTOV significantly exceed those of the other three atmospheric models at 20-55 km. Above 55 km, the CH 4 profile of GRAP remains relatively constant and is notably higher than the other three atmospheric profiles. The total column density values for the six atmospheric constituents are presented in Table 4. According to the China Greenhouse Gas Bulletin No. 11 report [34], observations from the China Meteorological Administration's Waliguan National Atmospheric Background Station in 2021 indicate annual average atmospheric concentrations of CO2, CH4 and N2O as 417.0 ± 0.2 ppmv, 1.965 ± 0.0006 ppmv, and 0.3351 ± 0.0001 ppmv, respectively. These values are comparable to the same period in the northern hemisphere mid-latitudes, although slightly higher than the global mean. It is important to note that, despite being in the same latitudinal zones, the atmospheric conditions can vary across different longitude zones. In terms of data source, the GRAP is more recent, ensuring that the atmospheric parameter profiles align more closely with current atmospheric conditions when compared to the other three reference profiles. The total column density values for the six atmospheric constituents are presented in Table 4. According to the China Greenhouse Gas Bulletin No. 11 report [34], observations from the China Meteorological Administration's Waliguan National Atmospheric Background Station in 2021 indicate annual average atmospheric concentrations of CO 2 , CH 4 and N 2 O as 417.0 ± 0.2 ppmv, 1.965 ± 0.0006 ppmv, and 0.3351 ± 0.0001 ppmv, respectively. These values are comparable to the same period in the northern hemisphere mid-latitudes, although slightly higher than the global mean. It is important to note that, despite being in the same latitudinal zones, the atmospheric conditions can vary across different longitude zones. In terms of data source, the GRAP is more recent, ensuring that the atmospheric parameter profiles align more closely with current atmospheric conditions when compared to the other three reference profiles.
Comparison of Reference Polar Winter Profiles
This study compares reference profiles for the polar winter climatic zone. The selected profiles for comparison include the reference atmospheric profiles of the 90n30e grid of GRAP in January, the reference polar winter profiles of the AFGL, the reference polar winter profiles of the MIPAS, the RTTOV reference profiles, and the profiles measured in March 2022 at the NDACC ground station in Ny Ålesund, Norway, which is located in the polar regions. Figure 11a presents the comparison of the CO 2 reference profiles, which exhibit a similar trend to the equatorial and mid-latitude regions. Figure 11b presents the comparison of the CH 4 reference profiles. The CH 4 profile of GRAP exhibits higher values compared to the other three reference profiles, with a concentration peak at 15 km reaching nearly 2 ppmv. Furthermore, the GRAP values in the troposphere closely align with the measured profile values in the NDACC. Figure 11c presents the comparison of the O 3 profiles. The GRAP profile represents the lowest O 3 concentration, reaching as low as 4.7 ppmv, while the RTTOV profile exhibits the highest O 3 concentration, reaching up to 7.2 ppmv. The O 3 profile concentration from AFGL is in closer agreement with the measured profile from NDACC. Figure 11d illustrates the comparison of the N 2 O profiles. The trends in the five N 2 O profiles align closely with the equatorial and mid-latitude regions, with the N 2 O profile values of the GRAP and NDACC measured profiles being highly similar.
Comparison of Reference Polar Winter Profiles
This study compares reference profiles for the polar winter climatic zone. The selected profiles for comparison include the reference atmospheric profiles of the 90n30e grid of GRAP in January, the reference polar winter profiles of the AFGL, the reference polar winter profiles of the MIPAS, the RTTOV reference profiles, and the profiles measured in March 2022 at the NDACC ground station in Ny Ålesund, Norway, which is located in the polar regions. Figure 11a presents the comparison of the CO2 reference profiles, which exhibit a similar trend to the equatorial and mid-latitude regions. Figure 11b presents the comparison of the CH4 reference profiles. The CH4 profile of GRAP exhibits higher values compared to the other three reference profiles, with a concentration peak at 15 km reaching nearly 2 ppmv. Furthermore, the GRAP values in the troposphere closely align with the measured profile values in the NDACC. Figure 11c presents the comparison of the O3 profiles. The GRAP profile represents the lowest O3 concentration, reaching as low as 4.7 ppmv, while the RTTOV profile exhibits the highest O3 concentration, reaching up to 7.2 ppmv. The O3 profile concentration from AFGL is in closer agreement with the measured profile from NDACC. Figure 11d illustrates the comparison of the N2O profiles. The trends in the five N2O profiles align closely with the equatorial and mid-latitude regions, with the N2O profile values of the GRAP and NDACC measured profiles being highly similar. Table 5 presents the characteristics of the selected Chinese regional reference atmospheric temperature profiles. Temporally, representative months for each season, namely Table 5 presents the characteristics of the selected Chinese regional reference atmospheric temperature profiles. Temporally, representative months for each season, namely January for winter, April for spring, July for summer, and October for autumn, were chosen in the Chinese region. Spatially, the profiles encompass various regions in China, including the northern, central, southern, northeastern, northwestern and southwestern parts of the country. Figure 12 presents the comparison of reference temperature profiles for eight regions in China. Analysis of the figure reveals variations in surface temperatures (0 km) among all regions, with a maximum difference of 30 K occurring in winter, a minimum difference of only 15 K in summer, and approximately 20 K differences in both spring and autumn. In the troposphere, temperatures decrease with altitude, exhibiting the smallest variations in summer and a nearly parallel decreasing temperature profile. However, during spring, autumn and winter, a significant inflection point is observed around 10 km in Harbin, Urumqi, Beijing and Shanghai, where the rate of temperature decrease diminishes and even shows a rising trend. In the stratosphere, temperatures increase with altitude, displaying substantial differences between summer and winter. Harbin, Urumqi and Beijing exhibit significantly higher temperatures compared to the other five regions, while spring and autumn temperatures exhibit less disparity. In the mesosphere, temperatures decrease with altitude, reaching values of 160-190 K at the mesosphere's upper boundary. Finally, in the thermosphere, temperatures rise rapidly with altitude, peaking at 440 K. sen in the Chinese region. Spatially, the profiles encompass various regions in China, including the northern, central, southern, northeastern, northwestern and southwestern parts of the country. Figure 12 presents the comparison of reference temperature profiles for eight regions in China. Analysis of the figure reveals variations in surface temperatures (0 km) among all regions, with a maximum difference of 30 K occurring in winter, a minimum difference of only 15 K in summer, and approximately 20 K differences in both spring and autumn. In the troposphere, temperatures decrease with altitude, exhibiting the smallest variations in summer and a nearly parallel decreasing temperature profile. However, during spring, autumn and winter, a significant inflection point is observed around 10 km in Harbin, Urumqi, Beijing and Shanghai, where the rate of temperature decrease diminishes and even shows a rising trend. In the stratosphere, temperatures increase with altitude, displaying substantial differences between summer and winter. Harbin, Urumqi and Beijing exhibit significantly higher temperatures compared to the other five regions, while spring and autumn temperatures exhibit less disparity. In the mesosphere, temperatures decrease with altitude, reaching values of 160-190 K at the mesosphere's upper boundary. Finally, in the thermosphere, temperatures rise rapidly with altitude, peaking at 440 K. The analysis of temperature profile characteristics in various regions of China reveals that the temperature in the troposphere decreases as the latitude of the region increases.
Comparison of Reference Atmospheric Temperature Profiles in China
During spring, autumn and winter, the troposphere's upper boundary decreases with latitude in eight regions, reaching approximately 15 km in Harbin, Urumqi and Beijing. In summer, the troposphere's upper boundary is around 18 km in all seven regions except Urumqi, where it stands at 16 km. The studies by Qi Chengli reveal significant seasonal and regional variations in the vertical distribution of temperature profiles in China [15,16]. Comparing the reference temperature profiles of GRAP for China with Qi Chengli's studies and the actual situation in China yields similar results, suggesting that latitudinal variations can lead to substantial disparities in tropospheric temperatures. Therefore, further subdivision of the atmospheric reference profile based on latitude holds considerable significance.
Conclusions
Most reference atmospheric profile databases commonly used have limited spatial and temporal resolution. These classification criteria fail to meet the research requirements for global regions with intricate topography and diverse climates. To address this, this study constructs the GRAP. The random forest regression model and the stratified mean method are adopted to process the ACE-FTS L2 products, the AIRS L2 products and the ERA5 reanalysis data. The data is divided into monthly intervals from January to December and spatially organized into 532 grids with dimensions of 5 • × 30 • , creating a comprehensive global atmospheric profile reference database. Leading to the following conclusions: (1) GRAP provides extensive coverage on a global scale, presenting a comprehensive composition of the atmosphere that accurately presents its current state. (2) Four atmospheric reference profile databases were used as input parameters for the RFM radiative transfer model to simulate the FY-3E HIRAS-II absorption spectrum and compare it with the measured spectrum. The results illustrate that the spectral simulation of GRAP as an input parameter is a better fit for the measured HIRAS-II spectrum. (3) The atmospheric profiles of the four reference atmospheric profile databases were compared to the measured atmospheric profiles from NDACC and the column total concentrations measured by WDCGG. The findings indicate substantial updates in the gas components of GRAP compared to the other three databases. Notably, the four greenhouse gases (CO 2 , CH 4 , O 3 and N 2 O) of GRAP demonstrate better alignment with the current atmospheric conditions. (4) Comparing the reference temperature profiles of GRAP for eight distinct regions of China reveals that these profiles effectively capture the climatic conditions. The fine spatial and temporal grids enable GRAP to achieve superior regional representativeness compared to previous reference atmospheric profile databases. Consequently, GRAP exhibits enhanced regional representation capabilities. in this study were downloaded from the NDACC Rapid Delivery (RD) Data Access, for which we would like to express our sincere thanks! | 9,494.8 | 2023-01-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
An Investigation of the Significant Criteria of Vegetation Selection and Planting Arrangement in Designing Urban Nodes
Unresponsive design guidelines for open spaces and the continuous allocation of land for the construction of buildings has led to the ‘concretisation’ of urbanscapes. This has resulted in the urban heat island effect marking the increase in air temperature making urban spaces almost unbearable for urban residents to dwell in. However, new effort is being made to reduce the effect of heat gain at the pedestrian level by planting vegetation in open spaces particularly urban nodes to create a comfortable outdoor environment. Appropriate vegetation selection in landscape design which is capable of reducing outdoor air temperature, is needed in designing urban nodes in hot-humid climates. The study is aimed to investigate the effectiveness of different vegetation types and their planting arrangement in adequately shading outdoor spaces for pedestrian activities. Two node intersections in Kuala Lumpur’s urban space were selected based on vegetation type and planting arrangement. Daylight intensity was measured using the lux meters and the shadow casted underneath the vegetation. Human activity within the area was also observed to determine which vegetation type and planting arrangement is most suitable for pedestrian activity. Results revealed that the vegetation types and its planting arrangements significantly influences the amount of daylight penetrating the tree foliage and shadow casted on the ground which encourages human interaction at the node intersections.
INTRODUCTION
Hot-humid regions are characterised as nonarid climates taking up much of the equatorial belt where day length and temperature remain relatively constant throughout the year.However, developing tropical cities are experiencing microclimatic variations due to rapid urban growth with much reference to the evolving urban environment.This predisposition has increased demand on the comfort requirements in the design of outdoor urban spaces.As comfort at street level in the urban environment deteriorates, urban dwellers are losing their ability to create meaningful relationships with their urban environment and spending longer time indoors in controlled air temperature (Ahmed, 2003).
Asian cities such as Kuala Lumpur (KL) are experiencing unprecedented urbanization that has progressively modified urban spaces, building structures and human activities (Ahmed, 2003) causing rapid warming in cities seen in recent times (Hu and Brunsell, 2015;Stocker et at., 2013;Mayer et al., 2008).Green allocated spaces in the city centre continue to disappear due to vast development of high rise buildings.These high-rises are creating rougher urbanscapes, less windy and often drier environment as compare to their rural counterparts resulting in higher temperatures in the city.Hence, heat waves have become more frequent, more intense and longer lasting (Meehl and Tebaldi 2004;Thorsson, 2010).With the growing urban population, a threat to human health and well-being is posed due to this increased temperature and air pollution in urban environments (Myers andPatz, 2009 andPatz et al., 2005).Lindberg (2011) suggested the use of appropriate vegetation foliage and a suitable planting arrangement in city spaces as a means to provide shade which significantly reduces the outdoor temperature and increases overall comfort in pedestrian zones.In the context of KL, the microclimate has contributed to the public's unwillingness to walk on city streets.The heat condition and constant exposure to the sun could reach alarming levels, where over exposure and physical activity could lead to heatstroke, sunstrokes, muscles cramps, heat exhaustion, severe heat rash and pulmonary disorder (Kleerekoper et al., 2012).To compound to the high air temperature and humidity levels, greater demands are placed annually on the use of mechanical cooling systems in private residents, offices and commercial areas.Car traffic activities in the city centre further contribute to increasing the air temperature of the outdoor environment particularly at street level (Kleerekoper et al., 2012).It can also be said that heat gain has contributed to the immense pressure on private vehicles as a preferred mode of transportation within the city.These are the contributing factors to the urban heat island phenomenon in KL.The temperature in the city centre is the highest due to high density, high rise developments and ground surfaces covered with black tarmac, blocks of marble, granite or tiles which are heat absorbents when exposed to direct sunlight (Elsayed, 2012a).Eliasson (1993), Shashua-Bar and Hoffman (2000) and Elsayed (2009) in their studies confirmed that large green areas have positive effects on the temperature in the city.In order to mitigate the effects of urban heat island in the city, Elsayed (2012a) suggested that well planned tree planting programs should be reinforced in the city of KL as a main strategy to ameliorate the excess heat.
This study investigates the vegetation type and planting arrangement and their performance in reducing daylight intensity and shadow casting ability on open surfaces in KL.In this study, two nodes intersections in KL's urban space were selected and compared in terms of their effectiveness in contributing to a comfortable pedestrian environment.This study is divided into three objectives: i) to identify the vegetation selection and its planting arrangement in the selected urban node intersections, ii) to investigate the effectiveness of the vegetation selection and its planting arrangement in relation to daylight penetration on the ground surface, and iii) to analyse the amount of daylight penetrating the ground surface influencing the human activities at the nodes intersections.
VEGETATION SELECTIONS AND PLANTING ARRANGEMENT
The effect of vegetation on the microclimate, landscape character, temperature control and energy consumption has been measured and evaluated prolifically in literature by Mcpherson (2001), Streiling and Matzarakis (2003), Picot (2004), Shahidan et al. (2010).These studies uncovered that the physical characteristics of a tree are primary factor considered in regulating microclimatic conditions for thermal comfort (Shahidan et al., 2010).
Trees have a multitude of functional, psychological, ecology and aesthetic advantages to the city and its occupants.The tree's canopy is a major component contributing to the microclimatic environment (Shahidan et al., 2010).Shade from a tree's canopy is associated with the vegetation foliage arrangement that significantly influences microclimate factors such as light intensity, wind velocity, solar radiation (Shahidan et al., 2010) and the filtering dust and noise (Lindberg, 2011).The structure of the tree canopy such as its form, height, branching structure, foliage density and leaf cover is vital to the degree of shade created (Kenny et al., 2009a, b;Brown, 2011;Shahidan et al., 2010).
Foliage geometry of tree canopies create shade that can reduce diffused light and glarefrom the sky and surrounding areas -that fall on the ground under the tree, thereby altering the heat exchange between the space below (Shahidan et al., 2010).This is done with the tree's crown consisting of branches, leaves and twigs, providing shade and reducing wind speed (Kenny et al., 2009a, b;Brown, 2011;Shahidan et al., 2010).This has an impact on the comfort of people walking or sitting under the shade (Shahidan et al., 2010).During the day, shading trees also indirectly reduce heat gain by altering terrestrial radiation and ultimately reducing ground surface temperatures (Akbari et al., 2001;Shahidan et al., 2010).
The ability of shading trees to improve comfort levels by intercepting and storing heat from direct solar gains in outdoor spaces leads to a significant reduction in downward energy flow in the form of visible light and solar infrared waves (Kenny et al., 2009a, b;Brown, 2011;Shahidan et al., 2010).According to Brown, (2011) and Shahidan et al. ( 2010), all trees can filter between 80% to 90% of the incoming radiation depending on their leaf density and planting arrangement (of the leaves of the tree and of the trees in the space).
Furthermore, about 20% of the infrared is absorbed, 50% is reflected and only 30% is transmitted.Cumulatively, a total of approximately 50% of visible and infrared radiation are absorbed, 30% is reflected and only 20% is transmitted (refer Figure 1).The more layers of leaves added, the better the efficiency at decreasing solar radiation under the tree canopy by shading (Shahidan et al., 2010).Therefore, the denser the tree foliage and the closer the trees are to each other, the lesser the amount of visible and infrared radiation falling on the ground underneath the tree canopy.A tree's shading performance differs with each species and their radiation filtration effectiveness will influence microclimate modification (Shahidan et al., 2010).This makes it necessary to investigate different species shading capacity and their planting arrangement to understand the impact of each vegetation type on outdoor comfort levels for pedestrian activity.This paper does not directly measure the infrared radiation under the tree but uses the light intensity to measure the shading provided by the vegetation.The research uses the theory presented by Kenny et al. (2009a, b), Brown (2011) and Shahidan et al. (2010) to investigate the amount of daylight penetrating the tree foliage and the shadow casted on the ground to measure the shading capacity of different tree types.
Nodes are described as strategic points where people enter and exit the city.The nodes are typically the intensive foci of an area that is embedded within the neighbourhood and ties firmly into major features of the city.This research looks at node intersections that are typically places of break from continuous movements, occurring at crossings or convergence of paths.Kevin Lynch (1960) described these elements as junctions and functioning as nodes that make the city legible.The dynamic node is a junction point from which movement flows in and out where people make decisions for their desired direction.An appropriate urban node design is essential because people's attentions are heightened as they perceive nearby elements with more clarity than usual (Worrell, 2011).
RESEARCH METHODOLOGY
The study employs an exploratory research methodology in eliciting data which consist of an observation and case study.The light penetration measurement and shadow casting analysis were carried out during the site visit to the study areas.Vegetation types were identified on the site and the planting arrangement was recorded for information on which tree canopy and planting arrangement provide the most shade at ground level by measuring the light intensity and visually examining the shadow casted.Photographic records were taken at different hours of the day to visually determine the degree of shade provision.
OBSERVATION ON THE CASE STUDY LOCATION
Site visit and investigation were conducted frequently within 4 weeks in two selected nodes intersections coded as Site A for the intersection of Jalan Raja Laut, Jalan Parlimen and Jalan Tun Perak; and Site B for the intersection of Jalan Pinang and Jalan P. Ramlee (Figure 2).Vegetation types and planting arrangements in Sites A and B were mapped out.Observation of human activities underneath the tree foliage were made to identify the appropriateness of the vegetation selection and planting arrangements of each site.
LIGHT PENETRATION MEASUREMENT
The research focused on the daylight intensity to measure the degree of light penetrating the tree foliage in the case study sites.The measurements were taken over a seven days period during different hours of the day in March representing the hottest month due to equinox.Daylight intensity data were recorded using the Lux-meter to measure sunlight illuminance in Klux.The measurement was taken at 3 areas within the site which were: i.Under the tree foliage; ii.
3 meters away from the tree foliage and; iii.
At the centre of the hard surface exposed to sunlight.
SHADOW CASTING STUDY
In order to validate the daylight intensity data, the visual examination of the shadow casted was carried out.The shadow lengths were also measured from the tree trunk to the end of shadow line at three different times of the day (9.00a.m,12.30p.mand 5.00p.m).
STUDY LOCATIONS
The (Elsayed, 2006).Elsayed continues that traffic activities in the city centre of KL also contribute to high temperatures on the overall outdoor environment particularly on the street level.To reduce the heat temperature in the city and to transform KL into a world-class city by 2020, Dewan Bandaraya Kuala Lumpur (DBKL), the city hall, has targeted to increase the greenery in the city by planting 100,000 large-coverage trees.
Thus this paper is to study the effectiveness of those initiatives by investigating the selection of vegetation species and its planting arrangements at the main street intersections which act as nodes.
URBAN NODES SELECTION
The two selected node intersections are i) Jalan Raja Laut, Jalan Parlimen and Jalan Tun Perak (referred to as Site A); and ii) Jalan Pinang and Jalan P. Ramlee (referred to as Site B).Both sites are illustrated in Figure 2.Both node intersections are located at primary streets in an urban commercial centre.The sites are constantly busy with motorized and nonmotorised transport movements during the days and nights.The areas are packed with high rise office buildings with few green spaces or shading trees.Site A is directly connected to Jalan Tuanku Abdul Rahman, which is one of the first street built in KL.Site B is located at the outskirts of Suria Kuala Lumpur City Centre (KLCC) mega development.The site is useful for connecting pedestrians walking at ground level in the Suria KLCC area to surrounding buildings.It is used as a rest space for people on foot with a considerable pedestrian volume.Site A was designed in a sparse arrangement with three different tree types namely Bucida molineti variegated, Livistona rotundifolia, and Peltophorum pterocarpum.
There was approximately 3.5 meters distance between each tree.The trees were arranged in a formal planting arrangement with only part of the site covered by trees as can be seen in Figure 3 (left).On Site B, only one type of tree was identified in the design of the space namely Hopea odorata.The tree is planted in a grid arrangement covering the entire site with a 1.5 meter distance between trees.The proximity of one tree to the next on both sites has allowed the canopies to overlap and provide greater resistance to visible and infrared radiation and casts a darker and longer shadow on the ground below thus providing more shade and reducing air temperature at the intersections.
VEGETATION IDENTIFICATION AND CHARACTERISTICS
Livistona rotundifolia is a palm species with a single trunk with feather shape compound frond.The lower leaves in the tree structure thus allowing visual interaction for the pedestrians, which enhance visual safety.
Bucida molineti variegated has its branches typically horizontal giving it a layered appearance, pointing skywards in symmetry.
Finally, Peltophorum pterocarpum is a deciduous tree with oblong, spreading leaves.
They were planted at the edge of the square along Site A characterized by its large canopy which casts a huge shade towards the square.The trees botanic descriptions on Site A are as below. i.
Bucida molineti variegated: branches pointing skywards in symmetry; twigs growing densely in storeys on whorls around the trunk.Leaves are tiny and variegated; more often used as an ornamental and shading plant; 12 meter mature height (refer to Table 1) ii.
On Site B, the Hopea odorata tree is characterised by its conical shape with simple and alternate leaf as discussed in Table 1.The tree botanic description on Site B is as below.
Hopea odorata trees in Site B are maintained regularly and shaped by pruning the lower branches.This has created a clear space beneath the trees making movement easy and also increasing visual safety.
VEGETATION IDENTIFICATION AND HUMAN ACTIVITY
The study also looked into the relationship between the tree types and human activities.Findings in Site A shows most of the pedestrians did not use the sidewalk underneath Livistona rotundifolia and Bucida molineti variegated as an interaction or activity space (refer Figure 4) instead they use the sheltered sidewalk under Peltophorum pterocarpum for pedestrian movement (7% of total space).65% of the total space area was covered by the hard surface which raised the ambient temperature during midday and through the evening.This created an uncomfortable environment which became less interactive indicated by the absence of pedestrians in the space throughout the day.Contrary to the observation of Site A, the space in Site B is used for leisure such as relaxing, sitting, standing and taking naps because of its shadiness and breeziness.The observation showed pedestrians walking through the space and stopping for other activities such as sitting on benches within the space, or just stopping and standing for a few minutes to enjoy the place's ambience.This ambiance creates convenient and comfortable green space which formed a sense of enclosure that makes the public feel safe and secure as shown in Figure 5.
LIGHT INTENSITY DATA
On Site A, weekly light intensity data was taken at several positions including: i) under the vegetation; ii) 3 meters outside vegetation and at the centre of the hard surface with no shading; and iii) at four different times (11:30 a.m, 12:30 p.m, 1:30 p.m and 2:30 p.m) using the Lux meter.Three readings were taken at each point and the findings were averaged.The results of the intensity of visible light are shown in Figure 6.The line chart (Figure 6) shows a gradual increase in light intensity in the morning from 11.30 a.m till 12.30 p.mThis is due to Kuala Lumpur experiencing of the overhead sun as its located near the equator (3.1333° N, 101.6833°E) at noon.For this reason, the highest amount of visible and infrared radiation was received at 12:30 p.m Beyond 12.30 p.m, the light intensity is seen to decrease gradually both outside and under the trees.The graph further shows that at 12:30 p.m, the centre of the square and 3 meters outside the vegetation shade, receive the highest amount of sunlight (78.22 Klux) due to the absence of a vegetative cover.The amount of visible light coming directly from the sun was reduced by more than half (53.44%) under Bucida molineti variegated when compared to the uncovered area.It was further reduced by 61.74% under Livistona rotundifolia and further reduced by 89.14% under Peltophorum pterocarpum.This has shown that Peltophorum pterocarpum's trees foliage has filtered the highest amount of visible light thus expected to have the highest shading capacity.x Outside the trees On Site B, weekly light intensity data were recorded.The measurement shows that the highest light intensity was received at 12:30 p.m the graph shows that at this hour, the centre of the space and 3 meters outside the vegetation areas received the highest amount of visible light (56.06Klux).In comparison to Site A (78.22 Klux), the planting arrangement of the vegetation had a significant role in reducing the amount of light penetration on the ground surface.The dense planting arrangement covers approximately 90% of the total area on Site B. Only 1.5 Klux of visible light penetrated through the tree foliage.98.23% of the visible light was reduced under Hopea odorata (refer to figure 7).This shows that a single vegetation selection of Hopea Odorata with dense and close arrangement has successfully filtered the highest amount of daylight and ultimately reducing terrestrial radiation and contributing to increased outdoor thermal comfort.
SHADOW CASTING STUDY
The shadow casting study was conducted to validate the light intensity data.The shadow cast by each tree type is measured by the length of the shadow at different times of the day.The shadow is also visually examined with the aid of photographs.Measurements were recorded at three different times: 9.00 a.m, 12.30 p.m and 5.00 p.m Site A shows that deciduous species, Peltophorum pterocarpum provides the largest shadow coverage with its foliage, during the hottest hours around (12.30 p.m).This can be observed from the shadow lengths measured from the tree trunk to the highest point on the shadow cast (D1=4.8mand D2=4.3m) as shown in Table 2. Livistona rotundifolia casts the second largest shadow coverage with its foliage with a shadow length of D1=2.7m and D2=2.3m during the hottest hours.Bucida molineti variegated cast the shortest shadow coverage with a shadow length of D1=2.2 m and D2=1.7 m.Bucida molineti variegated with its oval to rounded crown and alternate leaf shape allows much more sunlight to penetration its foliage as compare to Peltophorum pterocarpum that has spreading leaf form and random, unsymmetrical branch and leaf growth.Results show that the type of tree with similar foliage characteristic to Peltophorum pterocarpum can reduce the amount of daylight penetrating the leaf foliage of the space under.
Similar to Peltophorum pterocarpum in Site A, the Hopea odorata in Site B is also categorised as a deciduous tree.Although this species is shaped as conical with simple and alternate leaf, it could also filter large amounts of daylight mainly because of the grid and close proximity arrangement between trees.The denseness of the tree planting arrangement in Site B is successfully providing the shade under the tree from 9.00 a.m until 5.00 p.m as shown in the shadow casted image in Table 3.The data collected on different tree capacity to provide shade and planting arrangements in Site A and B revealed that selecting the appropriate vegetation type (with dense foliage) and the planting arrangement (proximity of one tree from the other) has significant influence on the amount of daylight penetrating the foliage to the space below and the shadow formed on the ground.The shade provided can further enhance the ambience of the area for pedestrians.The results show that trees can reduce around 47% to 99% of daylight penetration depending on the vegetation types and planting arrangement in congruence with Kenny et al., (2009a, b) and Brown (2011) theories.
Vegetation types planted at both study locations have given different percentages of light intensity values that explain the effectiveness of their characteristics and planting arrangement.Of the four vegetation types, trees with foliage characteristics similar to Peltophorum pterocarpum in Site A and Hopea odorata species in Site B provide greater shelter from visible light thereby casting more shadow over the site to the ground surface.The case studies investigated showed that Hopea odorata in Site B with conical shaped leaf arrangements and simple alternate leaf shape -expected to have less shading capacitycan become very effective when arranged appropriately as is done in Site B in a grid arrangement with small distances between trees.Furthermore, tree types like Peltophorum pterocarpum (though capable of significant reducing the amount of daylight flowing through as a single tree, as compared to Hopea odorata) can be less successful in reducing the amount of daylight penetrating the tree foliage in a sparse arrangement as can be observed on Site A.
As mentioned earlier in section 3.5, the sites selected are at nodes intersections in proximity to major commercial areas and Primary Street that is used as a means to connect between places.The hot-humid climate and heat gain in Kuala Lumpur prevent many from walking.Nodes are an important element in such situations because they serve to break the prolonged pedestrian daily trips.They provide a space where they can rest and enjoy an aesthetically sheltered urban environment as protection from the hot sun.Providing more spaces that can sufficiently shelter and permit pedestrian activities can significantly boost commercial activities within the area as it acts as a means to attract more people to walk in urban public spaces (Gehl and Gemzoe, 2004;Lynch, 1960).Mapping out pedestrian routes and providing shading trees can alter the urbanscape to inject human activity back to the city streets as is observed in Site B where pedestrians have laid claim to the space.This is not the situation in Site A, even though the node intersection can serve a greater purpose due to its proximity to heritage and commercial sites in KL such as Jalan Tuanku Abdul Rahman, Masjid Jamek and Merdeka Square.Simple gestures such as the provision of vegetations in a proper planting arrangement for shading and thermal comfort of an outdoor environment can enhance the urban public spaces.
CONCLUSION
This paper set out to illustrate the effectiveness in shade creation of four vegetation types found on two different node intersections in providing shade and ultimately improving thermal comfort in hot-humid regions.This study has shown that some vegetations like Peltophorum pterocarpum in Site A and Hopea odorata in Site B have the ability to reduce light intensity and improve outdoor thermal comfort.For these vegetations to be more efficient, the physical character of the canopy, denseness of vegetation foliage, multiple layer arrangement of the leaves and the planting arrangement need to be taken into consideration during the design stage of the streetscape.Using Kenny et al. (2009a, b), Brown (2011) theories, it was assumed in the study that all vegetations will absorb a significant amount of visible infrared radiation depending on the degree of the shade provided.Furthermore, the degree of shading is dependent on the amount of daylight penetrating the foliage of the vegetations.Thus, the results imply that vegetation foliage characteristic is a critical factor that should be considered in respect to thermal comfort in selecting vegetation in urban spaces intended for human activity.Furthermore, appropriate planning with the planting arrangement and the tree type in the earlier stages of the design can help provide a thermally comfortable outdoor space for pedestrians.Increasing the number of vegetation in urban context could promote interactive open spaces with positive activities within the node intersections and attract more people to walk on city streets.The study is only at its initial stages and the preliminary part of the findings is presented due to time constraints.The extended study is underway and will consider both visible and infrared radiation measurements as is presented in previous literature investigating appropriate vegetation type for hot-humid climates.The research findings are most useful to designers to inform and help in understanding some of the necessary measures using appropriate vegetation types and planting arrangement, in designing urban nodes that are thermally comfortable for human activities in the urban environment.
Figure 2 :
Figure 2: Nodes intersection Site A (left) and B (right) (Source: Google maps)
Figure 3 :
Figure 3: Layout of nodes intersection at Sites A (left) and B (right)
Figure 4 :
Figure 4: Formal clustered vegetation arrangement in grid layout in Site A
Figure 5 :
Figure 5: Single tree selection arranged in a grid and close planting arrangement Site B.
Figure 6 :
Figure 6: Average weekly light intensity underneath the vegetation and outside of the vegetation area in Site A.
Figure 7 :
Figure 7: Average weekly light intensity underneath the trees and outside of the trees area in Site B node intersection.
study was carried out on two nodes at intersections in two commercial districts in the city centre of KL.KL is located in the Klang Valley between latitude 3 o 08′ North and longitude 101° 44′ East.It has low variations of temperature throughout the year.KL has a hot-humid climate and experiences a wet tropical climate, in which the months of April-May and October-November can be considered as the wettest months, while December-March and June-September are the driest.During the day, the temperature ranges between 29 -32 o C, while a temperature of about 22 -24 ºC is recorded at night(Elsayed, 2012b).Within the city of KL, many open areas are covered with blocks of marble, granite or tile.Although these materials store less heat than black tarmac, they still absorb large amounts of heat from direct sunlight and release the heat during late afternoons, evenings and early nights
Table 1 :
Characteristics of the vegetation in Sites A and B nodes intersections. | 6,204.8 | 2016-12-29T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Leveraging Google Earth Engine for Drought Assessment using Global Soil Moisture Data
Soil moisture is considered a key variable to assess crop and drought conditions. However, readily available soil moisture datasets developed for monitoring agricultural drought conditions are uncommon. The aim of this work is to examine two global soil moisture data sets and a set of soil moisture web-based processing tools developed to demonstrate the value of the soil moisture data for drought monitoring and crop forecasting using Google Earth Engine (GEE). The two global soil moisture data sets discussed in the paper are generated by integrating Soil Moisture Ocean Salinity (SMOS) and Soil Moisture Active Passive (SMAP) satellite-derived observations into the modified two-layer Palmer model using a 1-D Ensemble Kalman Filter (EnKF) data assimilation approach. The web-based tools are designed to explore soil moisture variability as a function of land cover change and to easily estimate drought characteristics such as drought duration and intensity using soil moisture anomalies, and to inter-compare them against alternative drought indicators. To demonstrate the utility of these tools for agricultural drought monitoring, the soil moisture products, vegetation- and precipitation-based products are assessed over drought prone regions in South Africa and Ethiopia. Overall, the 3-month scale Standardized Precipitation Index (SPI) and Normalized Vegetation Index (NDVI) showed higher agreement with the root zone soil moisture anomalies. Soil moisture anomalies exhibited lower drought duration but higher intensity compare to SPIs. Inclusion of the global soil moisture data into GEE data catalog and the development of the web-based tools described in the paper enable a vast diversity of users to quickly and easily assess the impact of drought and improve planning related to drought risk assessment and early warning. GEE also improves the accessibility and usability of the earth observation data and related tools by making them available to a wide range of researchers and the public. In particular, the cloud-based nature of GEE is useful for providing access to the soil moisture data and scripts to users in developing countries that lack adequate observational soil moisture data or the necessary computational resources required to develop them.
estimates produced by the PM can be improved by assimilating satellite derived observations [12,13]. Here we focus on the operational implementation of the DA enhanced PM using soil moisture retrievals from two passive microwave missions, the European Space Agency (ESA)'s Soil Moisture and Ocean Salinity (SMOS) [14] and the National Aeronautics Space Agency (NASA)'s Soil Moisture Active Passive (SMAP) [15]. SMOS andSMAP, launched in 2009 and, respectively are the first two missions specifically designed to monitor near-surface soil moisture at a global scale using L-band frequency.
The goal of this paper is to announce the availability of these global soil moisture data sets and demonstrate their value for drought monitoring using Google Earth Engine (GEE). GEE is a web-based service that stores a petabyte archive of earth observations and related data and provides an efficient processing software which enables users to develop complex geospatial analyses and visualizations utilizing high-performance computing resources. The GEE capabilities have been utilized for a range of applications, including soil mapping, malaria risk assessment, and automated cropland mapping [16][17][18][19]. In this study, we demonstrate the value of the SMOS-and SMAP-datasets and web-based tools utilizing the global soil moisture data set generated using the satellite-enhanced PM available in the GEE data catalog. GEE and the available tools enable users to acquire, process, analyze and visualize earth observing data rapidly for any user specified region across the globe without downloading and processing a large volume of data on the user's desktop. The web-based drought assessment tools alleviate the need for users to install and work with desktop data managing and processing software which are often labor intensive, time consuming and difficult to reproduce, thereby overcoming compatibility limitations and enhancing usability and reproducibility of the analyses and results.
The paper is organized as follows: Section 2 provides a detailed description of the soil moisture data, modeling approach and preparation steps for integrating the data into the GEE platform; Section 3 focuses on the functionality of the GEE tools developed for drought assessment using the satellite-enhanced PM global soil moisture data; Section 4 describes the application of the GEE tools over South Africa and Ethiopia; and Section 5 and 6 provide some discussion of the results and conclusion, respectively.
Data processing for GEE platform
An overview of major methodological steps applied in this study is provided in Figure 1. First, we processed satellite-based soil moisture data sets to estimate surface and RZSM, and their anomalies. Then, we used RZSM and precipitation data to explore their spatial and temporal variability with different land cover types. Next, we estimated drought characteristics from RZSM anomalies and compare against other alternative drought indices. Details about these data sets are provided in Table 1. One of the primary goals of this study is to introduce global soil moisture data sets in the GEE, hence we provide details description of the soil moisture data sets in the following sub-section. The two-layer Palmer Model used by USDA-FAS is a bookkeeping water balance model that accounts for the water gained by precipitation and lost by evapotranspiration [10]. The top layer is assumed to have 2.54 cm available water holding capacity at saturation, while the holding capacity of the lower layer varies depending on the depth of the bedrock. The model is driven by daily precipitation data and daily minimum and maximum temperature observations provided by the U.S. Air Force 557 th Weather Wing (formerly known as U.S. Air Force Weather Agency, AFWA). The AFWA data set is derived using multiple sources, inducting remotely sensed observations and gauge data acquired from the World Meteorological Organization (WMO). The model is enhanced by adding a data assimilation unit, which allows the routine integration of satellite-based observations into the model using a 1-D Ensemble Kalman Filter approach (EnKF) [11,12]. The purpose of this modification is to improve the PM RZSM information by integrating added value of the surface soil moisture retrievals to the model and examining their potential to correct for meteorological forcing uncertainty. A detailed description of Bayesian theory-based filtering, including the EnKF is beyond the scope of this paper, however, the methods are well-established and documented [26][27][28][29][30].
EnKF is a sequential Monte Carlo assimilation technique, where the model forecasts are optimally updated in response to the satellite observations via the Kalman gain (K). Operational implementation of EnKF requires some knowledge of the model uncertainty (Q) and the error of the satellite observations (R). Here, both of these parameters have been parameterized using some a priori knowledge. Given the above discussed and wellestablished dependence of the model accuracy on the uncertainty of the rainfall data, and the fact that the AFWA rainfall data set is rain-gauge corrected, R has been modeled as a function of proximity to WMO gauge station. Q on the other hand has been parameterized as a land cover type using published accuracy assessment analysis [31][32][33][34][35][36][37]. We discuss the implementation of the satellite enhanced PM using remotely sensed observations acquired from two L-band missions, SMOS and SMAP. Full technical description of these missions can be found in [14] and [38], respectively.
The corresponding SMOS and SMAP soil moisture estimates assimilated into the PM are derived using slightly different retrieval approaches, however, both systems and soil moisture products show similar performance and overall accuracy [39,40]. The global SMOS soil moisture data are operationally acquired from the NOAA Soil Moisture Products System (SMOPS), which are distributed at 0.25° grid spacing (https://data.nodc.noaa.gov/ cgi-bin/iso?id=gov.noaa.ncdc:C00994; last accessed May 2018). SMAP offers a variety of soil moisture products (https://smap.jpl.nasa.gov/data/; last accessed May 2018). This study applies the L3 passive only SMAP soil moisture product. The data are routinely downloaded from the National Snow and Ice Data Center (https://nsidc.org/data/SPL3SMP/versions/4; last accessed May 2018). SMAP is distributed in EASE-grid 2 projection at 36 km grid spacing; therefore, the data have been pre-processed to match the model grid of 0.25°. b. Operational Implementation: The satellite enhanced Palmer Model is set to run operationally on the NASA's Global Inventory Modeling and Mapping Studies (GIMMS) Global Agricultural Monitoring (GLAM) system [23]. The model covers the Land Information System (LIS) domain (180°W − 180°E, 90°N-60°S) at 0.25° [41]. The system generates various soil moisture products, all are exported in GRIB format: surface and root zone soil moisture measured in [mm], profile soil moisture in [%] and surface and root zone soil moisture anomalies [-]. The latter represents standardized anomalies, which are calculated using following equation: Where, X SM is the SMOS/SMAP soil moisture, μ SM is the mean value, and σ SM is the standard deviation of the SMOS/SMAP soil moisture. Each value shows the deviation of the current conditions relative to a long-term average standardized by the climatological standard deviation, where the climatology values are estimated based on the full data record of the satellite observation period over a 31-day moving window (e.g., climatology of a day of interest is calculated using the 15 days prior and 15 days after that day of year for the entire historical record). Negative anomaly values indicate that the current conditions are below average, while positive indicate surplus of water.
The system is executed daily as new AFWA and satellite observations become available. However, SMOS and SMAP provide complete global coverage every 3 days, therefore, the output generated from the satellite-enhanced PM is binned to 3-days composites. Once a new 3-days composite product is produced, the data are operationally pushed to USDA-FAS and the data are automatically displayed at the agency's Crop Explorer web site. It should be noted that the SMOS-and SMAP-based systems are currently run independently and are expected to have slightly different climatologies given that each cover different time period (PM+SMOS: January 2010 to present; PM+SMAP: April 2015 to present).
Ancillary data sets:
Several additional data sets have been used in this study to explore the relationship between RZSM anomalies and meteorological drought indices as a function of land cover variability. The Climate Hazards Group Infrared Precipitation with Station (CHIRPS) data set, developed by the United States Geological Survey (USGS) in collaboration with Earth Resource Observation and Science (EROS) center is used to explore the spatial and temporal variability of the precipitation with different land cover types. CHIRPS is generated by integrating satellite imageries and in-situ gauge-collected observations. The daily rainfall data are distributed at 0.5° spatial resolution [42]. Vegetation type information was obtained from the ESA's global land cover data developed by utilizing observations from Medium Resolution Imaging Spectrometer (MERIS) collected by the Environmental Satellite (ENVISAT) [43]. The land cover map includes 22 land cover classes as defined by the Food and Agriculture Organization of the United Nations (FAO) Land Cover Classification System (LCCS).
The Standardized Precipitation Index (SPI) is a meteorological drought index used to assess different drought characteristics. SPI represents the standardized deviation of the observed cumulative precipitation relative to the long-term precipitation average. In this study, SPI at 3, 6 and 9-month scales were obtained from the International Research Institute for Climate and Society (IRI) at Columbia University [22]. The SPI data set was derived from the monthly precipitation totals from the Climate Prediction Center's (CPC) gauge -Outgoing Longwave Radiation (OLR) Blended global daily precipitation data. SPI was calculated by fitting a probability distribution to the long-term series of precipitation accumulation over the period of interest, where the resulting cumulative probability function is consequently transferred to a normal distribution. The monthly SPI data offers global coverage at spatial resolution of 1°.
The Normalized Difference Vegetation Index (NDVI) data were obtained from the Global Inventory Modeling and Mapping Studies (GIMMS) Global Agricultural Monitoring (GLAM) system. This dataset is derived using the Resolution Imaging Spectroradiometer (MODIS) Terra surface reflectance products, which are provided by National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) MODIS Adaptive Processing System (MODAPS) [44]. SPI and NDVI data sets are not available in the GEE public data catalog, hence, they were processed and ingested as personal assets in the GEE.
All data products used in this study have been averaged to monthly composites and then resampled to 1° grid spacing to ensure comparable temporal and spatial resolutions among the different datasets.
Google Earth Engine Tools:
We have developed several GEE tools which enable easy processing, analysis, and visualization of SMOS-and SMAP-based soil moisture data in the GEE platform. These tools can be arranged in three groups according to their functionality: (1) tools to process and ingest soil moisture data in the GEE data catalog, (2) tools to explore spatial and temporal variation of soil moisture and precipitation as a function of land cover, and (3) tools to estimate drought characteristics such as duration and intensity using soil moisture anomalies and inter-compare the latter against alternative drought indices. Detailed description of the individual tools is given below.
Data uploading routine:
The data upload routine has been designed to process, upload, and manage the SMOS-and SMAP-based soil moisture products in the GEE platform. This routine first converts the original soil moisture data stored in binary format into Georeferenced Tagged Image File Format (GeoTIFF) format as required by the GEE. Then, it creates a metadata file of the resulting imagery. The metadata is needed by the analysis routines, which is used to filter the data based on user specified spatial and temporal information. Next, the GEE Batch Asset Manager (https://github.com/tracek/gee_asset_manager) tool is used to upload bulk amount of data automatically in the GEE ( Figure 2). An alternative uploading option is to use the Asset Manager option in the GEE. However, the latter is time inefficient for large data sets as it allows the user to upload a single imagery at a time.
Soil Moisture Exploration routine:
The soil moisture exploration routine has been specifically designed to assess the spatial and temporal variability of soil moisture from local to regional and global scale. This function first filters the soil moisture data based on user specified temporal and spatial criteria using GEE 'filterDate' and 'filterBounds'; functions.. Next, the subset data are grouped by month using 'Filter.calenderRange' function and aggregated from the original 3-days composites into monthly composites. Then, interactive monthly soil moisture plots are generated using the chart function by using Chart image series function in GEE, which can be viewed and exported in multiple formats, i.e. Comma Separated Values (CSV), Portable Network Graphics (PNG), etc. The multi-annual image collection can be further reduced to a longterm average image representing mean soil moisture for the region of interest, which can be visualized through GEE Google Maps. This routine also enables an assessment of the variability of soil moisture data as a function of land cover type. The ESA land cover data is clipped based on the user-defined region of interest and a histogram is plotted to estimate the major land cover types of the study region. Then, the monthly soil moisture values are filtered based on land cover class and interactive plots are generated for additional analysis and visualization.
Drought assessment routine:
The drought assessment routine has been developed using GEE functionalities to compare various drought indicators based on specific drought characteristics such as percentage of month with drought conditions, maximum drought duration, drought severity, and intensity. In addition any drought indicators have been developed to monitor, predict and assess the severity of different drought types, which can be classified into two major categories of drought: meteorological and agricultural. Meteorological drought indicators are derived using precipitation data and have multiscale features that identify different types of drought condition. As root-zone soil moisture affects plants growth and productivity, RZSM anomalies are often used for quantifying and monitoring agricultural drought and capturing its impact on crop heath [4,[45][46][47]. In this study, we used four drought indicators-SPI3, SPI6, SPI9 (meteorological drought indicators) and SMOS RZSM anomalies (agricultural drought indicator) to asses drought conditions. Here we focused on SMOS RZSM anomalies as it has longer observation period compared to SMAP, hence well suited to estimate drought characteristics and Pearson's correlation coefficient. Positive values of RZSM anomalies, SPI3, SPI6 and SPI9 are masked out to identify only months with drought conditions, the corresponding number is divided by the total number of months to calculate the percentage of months with drought condition. Each product has been examined in terms of the following drought characteristics: drought duration (defined as the period during which the drought indices are continuously negative); drought severity (computed as the absolute value of the sum of all drought indices during a drought event); and drought intensity (calculated by dividing the severity by the drought duration) [48,49]. This routine also computes cross correlations between agricultural and meteorological based drought indices and allows to estimate the Pearson, Spearman and lag correlation coefficients using GEE 'Reducer.pearsonsCorrleation' function between the paired monthly time series of soil moisture anomalies and SPI as well soil moisture and NDVI (Figure 4).
Developed tools are accessible through the links provided in the supporting materials, though potential users are required to register to access (https:// code.earthengine.google.com/). Once the link is clicked, the user is presented with Earth Engine Code Editor which is a web-based Integrated Development Environment (IDE) for the Earth Engine JavaScript API ( Figure 5). Then, user can execute the program by clicking the 'run' button located above the JavaScript code editor panel, if it does not start automatically. Once this has been done, time series plots and spatial map are displayed in the Console Tab and Google Maps respectively. The GEE outputs results can be exported by clicking on the run button in the Tasks tab located in the right panel next to the code editor.
Example Applications
The GEE tools described in the previous section have been implemented to evaluate the spatial and temporal dynamics of soil moisture and precipitation, and assess the ability of the drought indices described in the previous to capture severity, duration and intensity of drought events over South Africa and Ethiopia during 2010-2017.
Drought is common in South Africa and Ethiopia and occurs in all climate areas with varying degrees of intensity, spatial extent, and duration [50]. In recent years the spatial extent and frequency of drought have increased in this area causing significant water shortage, economic losses and adverse social consequences [51]. Therefore, better understanding of the climatology and drought characteristics over these areas is important in order to improve decision-making and aid activities aimed to mitigate the impact of drought. Our analysis is focused on the 2010-2017-time period, which was determined by the availability of the SMOS data sets. For this analysis, the soil moisture explorer routine is executed in the GEE code editor by clicking on the link provided in the supporting materials to generate spatial map and time series plots of the precipitation and soil moister over South Africa. Then, we run the drought assessment routine to estimate drought characteristics and correlation among different drought indices over South Africa. Next, we re-run both the soil moisture explorer and drought assessment routine for Ethiopia by changing the country name inside the script. The output results of the GEE are imported into ArcGIS [52] to add legend, scale and proper color scheme and R [53] to generate box plot of drought characteristics.
Spatial and Temporal variability of Precipitation and Soil Moisture:
We first examined the long-term spatial distribution of the precipitation and RZSM and then analyzed the variability of those variables with different land cover types. Spatial variability of rainfall and RZSM over South Africa and Ethiopia are shown in Figure 6. Both variables exhibit high regional variability. In South Africa, generally the mean annual precipitation increases from west to east with the maximum rainfall (680 mm) occurring over the Mpumalanga and KwaZulu-Natal, while minimum rainfall (172 mm) falls over the western part of the country. The spatial variability captured by the RZSM reflects the precipitation variability showing wetter SM conditions in the east and dryer in the west (Figure 1, top row). The topographical variability significantly influences the spatial distribution of the precipitation and the soil moisture in Ethiopia. For example, the rainfall and soil moisture values are higher over the highland areas located in the central and north-western portion of the country, while the lowland areas located in the eastern part of the country are associated with lower rainfall amounts ( Figure 1, bottom row).
The monthly precipitation over Ethiopia and South Africa is driven mainly by the position of Intertropical Convergence Zone (ITCZ), which changes over the course of year [54,55]. A majority of the rainfall in Ethiopia falls during the summer seasons when the ITCZ is at its most northern position, however, the amount of rainfall also varies as a function of land cover. For example, forest, cropland, grassland and shrub land show identical rainfall patterns with one main wet season (June to September), and a secondary wet season (February to May), where the highest rainfall occurs during the month of September ( Figure 8). Over sparse vegetation, the major rainfall falls during the summer and winter season as most of this land cover is located in the southern part of the country, where the rainfall timing is associated with ITCZ, which passes through the southern position of the equator at that time. The monthly RZSM follows the rainfall distribution reaching the wettest soil moisture conditions during the month of September. The position of ITCZ also results in two distinct seasons in South Africa -a wet and dry season roughly from November to April, and May to October, respectively. The monthly soil moisture time series captures this seasonality, as seen in Figure 2. The monthly rainfall and soil moisture time series across South Africa vary with land cover, where regions covered by the mosaic and sparse vegetation receive highest and lowest amount of precipitation and soil moisture, respectively ( Figure 8).
Comparison of Drought characteristics
The RZSM anomalies indicated higher percentage of months with drought conditions compared to SPI (SPI3, SPI6 and SPI9) over both study regions ( Figure 9). Over South Africa, the average percentage of drought events identified in the RZSM anomaly data was 27%, which is 6% higher than the drought events captured by the SPI3. Additionally, among the rainfall-based drought indices, the SPI9 had the lowest percentage of months with drought events compared to the SPI3 and SPI6. This is in line with other studies [48], where the author found that agricultural-based drought indicators depict relatively larger values of drought months compared to the meteorological drought indices. The maximum drought duration varied among the different drought indicators. Based on our analysis, the maximum drought duration appeared to be higher in the meteorological-based indices than the agricultural-based indices. This is primarily because the meteorological-based drought indices integrate the drought condition over longer period of time than the agricultural-based drought indices [48]. The drought intensity was found to be higher in the agricultural-based drought variables and lower in the metrological-based drought indices. This example demonstrates the capability of the drought assessment tools, which can help to better assess the drought conditions.
Correlation between soil moisture and NDVI anomalies
Variations in RZSM substantially influence the vegetation dynamics (i.e., NDVI), which is a widely used vegetation index. Therefore, correlation analysis between RZSM and NDVI anomalies is important to understand the impact of changes in soil moisture on vegetation growth, which can be effectively utilized for early warning of time and areas of increased food insecurity [42,56]. The correlation of the RZSM and NDVI anomalies varied with the geographic location and the degree of lag time. The highest positive correlation coefficients and confidence level (i.e. p-vale < 0.1) are observed when soil moisture change is concurrent or precedes the change in NDVI by one month. In most of the locations NDVI and RZSM anomalies have positive correlations, however some regions indicate negative correlation at higher lags due to coincidence of negative NDVI anomalies with positive soil moisture anomalies [57]. In South Africa, the semi-arid Western Cape and Eastern Cape show higher coefficients compared to other parts of the country as the vegetation growth in those regions has high reliance on root zone soil moisture. [58,59]. No spatial variability in the lag correlation values was observed over Ethiopia ( Figure 10). We further investigated the variation of the soil moisture and NDVI relationship as a function of major land cover types. The highest agreement was found over areas covered by grass land for both study areas, while the lowest agreement was achieved over the shrub covered areas in Ethiopia and South Africa ( Table 2). This is partly due to the fact that grassland roots are located on the shallow depths and are more sensitive to changes in soil moisture than deep rooted plants such as shrub.
Correlation among soil moisture anomalies and meteorological based indices
A correlation analysis was carried out between RZSM anomalies and meteorological based drought indicators to evaluate how well the meteorological based drought indicators represent agricultural-based drought. Such information could be used to help indicate times and areas that are likely to experience agricultural stress. It is envisaged that such approaches will improve drought monitoring and early warning systems that rely mostly on meteorological indicators [60]. The GEE-based inter-comparative analysis between soil moisture anomalies against SPI showed high agreement and alludes to the value of combining such datasets to compliment a regional drought assessment that incorporates both meteorological and agricultural drought. Over both study regions SPI3 had higher correlation values compared to SPI6 and SPI9 (Figure 11), which indicates that SPI3 captures more of the agricultural drought. The performance of the meteorological drought indices varied spatially. In case of South Africa, the correlation values were relatively higher and statistically significant (p-value < 0.1) in the Western Cape and Eastern Cape compared to the Northern Cape of the region. The spatial distribution of the correlation for all meteorological droughts in Ethiopia have similar pattern, where higher and lower correlation values are generally distributed over the north -west and north-east side of the country respectively. The highest correlation between the soil moisture anomalies and meteorological drought indicator are associated with the cropland, which is consistent with [61], who showed that a 3-month SPI has the highest correlation with vegetation growth on croplands of the mid latitude U.S. Great Plains.
Discussion
Highest correlation of SPI3 with the RZSM anomalies indicated that short time meteorological drought represents the agricultural drought better compare to long-term meteorological-based indicators such as SPI6 and SPI9. The impact of meteorological drought on vegetation is cumulative meaning that vegetation does not respond instantaneously to the precipitation changes. The three-month SPI, which captures the precipitation pattern not only for the specific month of the interest, but previous two months as well results in highest correlation between SPI and soil moisture anomalies. On the contrary, the 12 and 6-month SPI values reflect precipitation patterns for annual and the entire growing season respectively and tend to diminish the variance in the precipitation data and smooth the SPI values results in lower correlation values [61]. The relationship between soil moisture and rainfall anomalies were also explored by Sims et al. [62] in the North Carolina where the author suggests that SPI on a scale of 2-3 months yielded highest correlation with soil moisture anomalies.
Our results indicate lower and higher correlation between RZSM anomalies and SPI based indicators in the dry and wet regions respectively, which could be related to the rainfall amount and soil types. The dominant soil types in the wet regions are clay and clay loam, which have higher water holding capacities, which could result in slower response to the rainfall, therefore soil moisture on a specific month would be more dependent on the previous month's rainfall. On the other hand, the arid region shows quicker response of the rainfall anomalies due to dry soil conditions and limited water holding capacity of the sandy soils that covers that region. Therefore, soil moisture in a specific month has a smaller dependence on previous month compare to the wet region [63].
In general, the land cover type has a significant impact on the relationship between RZSM anomalies and other drought indicators. For example, shrub land exhibits lower correlation values compared to the cropland which could be due to the fact that cropland roots are located in the shallow depths and are more sensitive to changes in soil moisture than deeprooted plants such as shrub. Similar observations were made by Camberlin et al. [64] and Huber et al. [65] for Africa, by Li et al.[61] for China and by Wang et al.[66] for the US-American central Great Plains. We also notice a delayed response of NDVI to RZSM anomalies for the shrub land over South Africa which might be related to soil texture and soil moisture amount as most of the shrub land are located in the wet region of the country characterized by more clay soils leading to slower response [67]. This is consistent with the findings of Wang et al. [66] who showed that the NDVI at humid sites takes longer time to response compare to the arid sites.
Conclusions
Soil moisture data are recognized as a fundamental physical variable that can be used to address science and resource-management questions requiring near real-time monitoring of the land-atmosphere boundary, including flood and drought monitoring, and regional crop yield assessment. This study introduced new sets of near real-time global soil moisture data and demonstrates the potential of GEE web-based tools and soil moisture data to assess regional drought condition. In general, meteorological drought indicator, SPI3, gives higher correlation values compared to SPI6 and SPI 9 with the RZSM anomalies. When, comparing the drought characteristics, RZSM anomalies exhibit relatively larger drought duration but smaller drought intensity compared to the meteorological-based drought indicators. The NDVI-RZSM anomalies are influenced by the vegetation cover, specifically shallow rooted plants that are more sensitive to soil moisture changes compared to deep rooted plants. The methods demonstrated here can be applied to other areas requiring early warning of food shortage or improved agricultural monitoring to help provide greater economic security within the agriculture sector.
Incorporating the global soil moisture data into GEE data catalog enables users to efficiently and quickly acquire and process large amount of data. The available tools allow easy analysis, visualization, and interpretation of the data. To this end, these GEE-based tools could enable scientists, policy makers and the general public to explore spatial and temporal variation of soil moisture information and drought conditions for any location in the world with minimal data processing or data management. In addition, all tools are easily transferable and can be used to explore spatial and temporal dynamics of other climate variables such as temperature and evapotranspiration. GEE does not require any additional software installation, which helps to overcome compatibility limitations and allows user to access the available codes and data from any computer connected to the internet. This significantly increases the data and tools usability and applicability. The GEE tools and the soil moisture data are open source and are freely available, which can enable users to use, modify, and suggest future improvements for both the tools and the data.
Although the GEE offers many benefits, it has limitations too. First, it requires basic knowledge of Python and Java script and users with limited programming knowledge might have a steep learning curve. Second, users are sometimes required to export the analyzed results to perform additional analyses due to limited functionalities and plotting options in the GEE. Finally, debugging the code is challenging as the user created algorithms run in the Google cloud distributed over many computers. Despite these limitations, the data distribution and processing approach offered by the GEE platform can be very beneficial, specifically for the developing countries that are typically data poor areas and lack high performance data processing platforms for drought monitoring or crop forecasting. Ingestion of soil moisture data sets to Google Earth Engine. The gray and gold boxes represent inputs and outputs respectively. Data processing steps in the soil moisture exploration routine. The gray and gold boxes represent GEE inputs and outputs respectively. The box identified by dotted line represents the process that run in the GEE server.
Figure 4:
Data processing steps in the drought assessment routine using GEE. The gray and gold box represent GEE inputs and outputs respectively. The box identified by dotted line represents the process that run in the GEE server. Components of Google Earth Engine code editor. Monthly variation of soil moisture and rainfall for different land cover types over South Africa (top) and Ethiopia (bottom).
Figure 9:
Comparison of percentage of months with drought condition, maximum drought duration and drought intensity over Ethiopia and South Africa for multiple drought indices. The center line of each boxplot depicts the median value (50th percentile) and the box encompasses the 25th and 75th percentiles of the sample data. The whiskers extend from q1 − 1.5 × (q3 − q1) to q3 + 1.5 × (q3 − q1), where q1 and q3 are the 25th and 75th percentiles of the sample data, respectively The Pearson's correlation coefficient computed between agricultural-based drought indices and meteorological-based drought indices for different land cover types. | 7,726.6 | 2018-08-11T00:00:00.000 | [
"Environmental Science",
"Agricultural and Food Sciences",
"Computer Science"
] |
Oligomer-prone E57K-mutant alpha-synuclein exacerbates integration deficit of adult hippocampal newborn neurons in transgenic mice
In the adult mammalian hippocampus, new neurons are constantly added to the dentate gyrus. Adult neurogenesis is impaired in several neurodegenerative mouse models including α-synuclein (a-syn) transgenic mice. Among different a-syn species, a-syn oligomers were reported to be the most toxic species for neurons. Here, we studied the impact of wild-type vs. oligomer-prone a-syn on neurogenesis. We compared the wild-type a-syn transgenic mouse model (Thy1-WTS) to its equivalent transgenic for oligomer-prone E57K-mutant a-syn (Thy1-E57K). Transgenic a-syn was highly expressed within the hippocampus of both models, but was not present within adult neural stem cells and neuroblasts. Proliferation and survival of newly generated neurons were unchanged in both transgenic models. Thy1-WTS showed a minor integration deficit regarding mushroom spine density of newborn neurons, whereas Thy1-E57K exhibited a severe reduction of all spines. We conclude that cell-extrinsic a-syn impairs mushroom spine formation of adult newborn neurons and that oligomer-prone a-syn exacerbates this integration deficit. Moreover, our data suggest that a-syn reduces the survival of newborn neurons by a cell-intrinsic mechanism during the early neuroblast development. The finding of increased spine pathology in Thy1-E57K is a new pathogenic function of oligomeric a-syn and precedes overt neurodegeneration. Thus, it may constitute a readout for therapeutic approaches. Electronic supplementary material The online version of this article (10.1007/s00429-017-1561-5) contains supplementary material, which is available to authorized users.
Introduction
The common neuropathological hallmark of a-synucleinopathies, including Parkinson's disease (PD) and dementia with Lewy bodies (DLB), is the deposition of aggregated a-synuclein (a-syn) within affected brain regions, paralleled by neuronal loss (Halliday et al. 2011). The putative function of a-syn has been implicated in regulation of the synaptic vesicle pool and neurotransmitter release due to its presynaptic localization in mature neurons and changes in synaptic transmission in a-syn knockdown and overexpression models (Iwai et al. 1995;Abeliovich et al. 2000;Murphy et al. 2000;Chandra et al. 2004;Nemani et al. 2010). The potential overlap of these functions of a-syn with its pathogenic effects in PD remains elusive (Lashuel et al. 2013). There is an increasing body of evidence showing a high neuronal toxicity of the oligomeric conformation of a-syn, whereas the derived aggregated, fibrillar conformation has been considered less detrimental. Oligomeric a-syn is elevated in the cerebrospinal fluid of PD patients (Tokuda et al. 2010) and its presence precedes neurodegeneration in brains of affected patients (Roberts et al. 2015). In vitro, a-syn oligomers induce toxicity in dopaminergic neuroblastoma cells in a time-and concentration-dependent manner (Danzer et al. 2007). Putative mechanisms of increased cell death include a pore-forming capacity of oligomeric a-syn (Conway et al. 2000;Reynolds et al. 2011). The artificial E57K mutant was previously shown to produce oligomerrelated pathology in rat substantia nigra in vivo (Winner et al. 2011). A transgenic mouse model overexpressing high levels of E57K-mutant a-syn in neurons under control of the murine Thy1-promoter (Thy1-E57K) was recently established (Rockenstein et al. 2014). Compared to mice overexpressing human wild-type a-syn under the same promoter (Thy1-WTS) and to non-transgenic littermates (NTG), Thy1-E57K showed aggravated frontal and hippocampal pathology with regard to neuronal loss, reduction of the presynaptic marker synaptophysin, and context-dependent learning at the age of 8-10 months (Rockenstein et al. 2014). In contrast, the fibrillar conformation of a-syn has been considered less detrimental (Winner et al. 2011).
The integration of newborn neurons during adult neurogenesis is a useful model to study the effects of diseaserelated proteins on spine formation within the ageing brain (Mu et al. 2010). Adult newborn neuron integration was impaired in a previous WTS-transgenic mouse model (Winner et al. 2012). The impact of oligomeric a-syn species on adult newborn neurons is unknown to date. Therefore, in the current study, we characterize the integration deficit of newborn hippocampal neurons comparing the wild-type a-syn (Thy1-WTS) and oligomer-prone a-syn transgenic models (Thy1-E57K) to non-transgenic controls (NTG). We demonstrate neuritic pathology due to transgenic oligomeric a-syn species. This supports the hypothesis that oligomeric a-syn promotes synaptic dysfunction as an early event in PD pathogenesis.
Animals
Animal experiments were conducted in accordance with the European Communities Council Directive of 24th November 1996 and were approved by the local governmental administrations for animal health (animal care use committee of the University of California, San Diego and ''Regierung von Unterfranken'', Würzburg, Az. 55.2-2532.1-45/11). Generation of the WTS-and the E57Ktransgenic mouse lines was described earlier (Rockenstein et al. 2002(Rockenstein et al. , 2014. WTS-transgenic mice overexpress human wild-type a-syn under the regulatory control of the mThy1 promoter (high-expressing line 61). E57K-transgenic mice overexpress human a-syn with an E57K point mutation under control of the same mThy1 promoter (highexpressing line 16). In all experiments, transgenic animals were compared to non-transgenic (NTG) wild-type littermate controls of the same C57BL6/DBA background (n = 6 per group).
BrdU treatment and tissue processing Animals (aged 3 months, n = 5 per genotype) received daily i.p. injections of 5-bromo-2-deoxyuridine (BrdU, 50 mg/kg) for 5 days and were sacrificed after 31 days. Euthanasia with xylazine/ketamine i.p. was followed by transcardial perfusion of animals with PBS followed by 4% paraformaldehyde for tissue fixation. Brains were dissected, postfixed for 6 h in 4% paraformaldehyde, and stored in 30% sucrose in 0.1 M phosphate buffer at 4°C. 40 lm-thick brain sections were obtained on a sliding microtome and were stored in cryoprotectant solution (25% ethylene glycol, 25% glycerol in 0.1 M phosphate buffer) at -20°C. As BrdU labeling studies were conducted separately for Thy1-WTS and Thy1-E57K, NTG littermate controls were included in each experiment.
Retrovirus-mediated labeling and analysis of newborn neurons
A Moloney murine leukemia retrovirus-based CAG-GFP plasmid was used as described earlier (Zhao et al. 2006). CAG-GFP drives the expression of enhanced green fluorescent protein (GFP) by the compound promoter CAG. A concentrated viral solution was titrated to 4 9 10 8 pfu/ml. Mice were anaesthetized using a weight-adjusted i.p. dose of xylazine/ketamine and a stereotaxic frame (Kopf Instruments) was used for sequential bilateral infusion into the dentate gyrus (AP -2.00 mm, ML ± 1.6 mm from bregma, DV -2.3 mm from skull) of transgenic mice (WTS and E57K) and respective controls (n = 6 per group). A total volume of 1 ll was slowly infused (0.2 ll/ min) followed by wound closure and a survival period of 31 days.
Microscopy
All counting procedures were performed on blind-coded slides. Recordings were performed on a fluorescence microscope (Observer.Z1, Zeiss) and on a confocal laser scanning microscope (LSM710, Zeiss) using the ZEN black software. For dendrite growth analyses, on average, four GFP-positive newborn neurons in the dentate gyrus of each animal were imaged resulting in a cell number of 24 per group. For each neuron, z-series of antibody-enhanced GFP-signal at 1.5 lm were acquired spanning the whole extent of the neuron within the section. Maximum intensity projections were then analyzed with ImageJ and NeuronJ. Spine recordings were performed on unstained mounted sections to preserve signal intensity. We chose dendritic segments in the molecular layer, but not in the granule cell layer (GCL) for spine imaging. The estimated surface area of each spine was calculated as 0.785 9 D major 9 D minor , with D major as the biggest diameter and D minor as the smallest diameter of the respective spine. Mushroom spines were defined by their average estimated surface area from three measurements of at least 0.4 lm 2 (Zhao et al. 2014).
For cell number and volume quantifications, every sixth section of the hippocampus was analyzed and values were multiplied by 6. For the differentiation analysis, 50 BrdUpositive cells were analyzed in the dentate gyrus of each animal; cells were randomly selected and analyzed by moving through the z-axis of each cell to exclude falsepositive double labeling. Total numbers of newborn neurons were determined by multiplication of the total number of BrdU-positive cells by the ratio of BrdU/NeuN-positive cells. For the quantification of cell death, activated Cas-pase3 (aCaspase3) positive cells were counted in the granule cell layer and CA3 region of every 12th section. NeuN ? cells of the granule cell layer and the CA3 region were quantified within a randomly placed, 150 lmwide counting frame in every 12th section and total cell numbers were estimated based on the ratio of the total area.
Statistical analysis
For statistical analysis with Prism (GraphPad Software), the significance level was set at P \ 0.05. All parameters were compared using the two-sided student's t test (regarding cell count, volume, dendrite length, number of branching points, spine density, and mushroom spine density) or a one-way ANOVA followed by a Tukey's multiple comparison post hoc test (for the comparison of PDGF-WTS, Thy1-WTS, and Thy1-E57K as well as for the comparison of neuron numbers of NTG, Thy1-WTS, and Thy1-E57K).
Impaired post-synaptic integration of newborn neurons in Thy1-WTS transgenic animals
We first analyzed the morphology of newborn hippocampal neurons in Thy1-WTS by retroviral labeling of dividing cells with GFP. Analysis was performed 1 month postinjection (Fig. 1a). Total dendritic length and the number of branching points of newborn neurons were not significantly changed when compared to NTG (see Table 1 for detailed results; Fig. 1b, c). The overall density of spines was unchanged when compared to NTG (Fig. 1f), but there was a significant reduction of mushroom spines (Fig. 1g)-a feature that has been shown before in PDGF-WTS and that indicates impaired spine maturation and postsynaptic integration of adult newborn neurons (Winner et al. 2012).
Oligomer-prone E57K a-syn exacerbates integration deficit of newborn neurons
We previously showed a reduction of synaptic markers in the hippocampus of Thy1-E57K mice (Rockenstein et al. a Experimental paradigm: CAG-GFP retrovirus was delivered to the hippocampus of 4-month-old animals and analysis was performed 1 month later. b Dendrite length was unchanged between NTG and Thy1-WTS mice. c Number of branching points was unchanged between NTG and Thy1-WTS mice. d, e Representative micrographs of GFP-labeled dendrites (upper line, scale bar 25 lm) and spines (lower line, scale bar 10 lm; arrows indicate mushroom spines) in NTG and Thy1-WTS mice. f Density of all spines was unchanged between NTG and Thy1-WTS. g Density of mushroom spines was significantly reduced in Thy1-WTS; *P \ 0.05 Table 1 Analysis of neurite morphology of adult newborn neurons in human wild-type a-syn transgenic animals (Thy1-WTS), human E57Kmutant a-syn transgenic animals (Thy1-E57K), and respective non-transgenic controls (NTG) Numbers are given as mean ± SD and P values when compared to respective NTG 2014) when compared to NTG and to Thy1-WTS. We thus analyzed the morphology of newborn neurons in Thy1-E57K by retroviral labeling (see Table 1 for detailed results; Fig. 2a). Similar to Thy1-WTS, we observed no outgrowth deficit regarding dendrite length and number of branching points (Fig. 2b, c). However, the overall density of spines was significantly reduced in Thy1-E57K (Fig. 2f). In addition, there was a significant reduction in mushroom spines (Fig. 2g), to a greater extent than what was observed in Thy1-WTS. In summary, the density of dendritic mushroom spines was reduced both in Thy1-WTS and in Thy1-E57K, whereas E57K a-syn had an additional strong negative effect on the density of all dendritic spines. Cell-extrinsic a-syn in the molecular layer impairs mushroom spine density of newborn neurons in PDGF-WTS (Winner et al. 2012). We, therefore, analyzed the spatial relation of a-syn in the dendritic compartment of newborn neurons of Thy1-WTS and of Thy1-E57K. E57K a-syn was found directly adjacent to GFP-labeled newborn neurons including dendritic shaft, thin spines, and mushroom spines (SFig. 1). Within the dendrites and spines of newborn neurons, however, transgenic a-syn signal was at low levels in both Thy1-WTS and Thy1-E57K, consistent with the use of the same promoter. These data suggest that transgenic a-syn-as specifically labeled by the humanspecific 15G7 a-syn antibody-within the axon terminals of the perforant path may impair mushroom spine density.
Promoter-dependent influence of transgenic a-syn on adult neurogenesis
In light of the spine alterations in Thy1-WTS and Thy1-E57K, we next analyzed proliferation and survival of newborn cells. In Thy1-WTS, the number of proliferating Fig. 2 Impaired overall spine density in Thy1-E57K mice. a Experimental paradigm: CAG-GFP retrovirus was delivered to the hippocampus of 4-month-old animals; analysis was performed 1 month later. b Dendrite length was unchanged between NTG and Thy1-E57K mice. c Number of branching points was unchanged between NTG and Thy1-E57K mice. d, e Representative micrographs of GFP-labeled dendrites (upper line, scale bar 25 lm) and spines (lower line, scale bar 10 lm; arrows indicate mushroom spines) in NTG and Thy1-E57K mice. f Density of all spines was significantly reduced in Thy1-E57K. g Density of mushroom spines was significantly reduced in Thy1-E57K; **P \ 0.01, ***P \ 0.001 PCNA-positive cells in the subgranular zone of the hippocampal dentate gyrus was unchanged when compared to NTG (see Table 2 for detailed results; Fig. 3b). Likewise, the numbers of DCX-positive neuroblasts were unchanged in Thy1-WTS (Fig. 3c). We performed BrdU-labeling of newborn neurons at the age of 4 months and analyzed survival after 1 month. Thy1-WTS did not show differences in total numbers of BrdU-positive cells (Fig. 3d). There was no change in the ratio of neuronal differentiation ( Table 2). The calculated numbers of BrdU-/NeuN-positive newborn neurons were also unchanged in Thy1-WTS (Fig. 3e). In summary, proliferation and survival of adult hippocampal newborn neurons were unchanged in the Thy1-WTS transgenic mouse model.
Oligomer-prone E57K a-syn previously showed enhanced neuronal toxicity when compared to WTS (Winner et al. 2011). In addition, the overall number of hippocampal NeuN-positive neurons was reduced in Thy1-E57K mice (Rockenstein et al. 2014). We thus analyzed adult newborn neuron proliferation and survival in the Thy1-E57K model. We found no differences when compared to NTG. In detail, proliferation was unchanged (see Table 2 for detailed results; Fig. 3f), the number of DCXpositive neuroblasts was unchanged (Fig. 3g), the total number of BrdU-positive cells was unchanged (Fig. 3h), the ratio of neuronal differentiation was unchanged (Table 2), and the calculated total number of newborn neurons was unchanged (Fig. 3i). For an analysis of young adult neuroblasts, i.e., newborn cells during their first 2 weeks of neuronal maturation, we analyzed the different morphological subtypes of DCX-positive cells (SFig. 2ad). We observed a reduction of the number of late-stage neuroblasts in Thy1-WTS, but there was no difference of young and intermediate neuroblasts in Thy1-WTS and in Thy1-E57K (Table 2, SFig. 2). Taken together, adult neurogenesis is neither affected by WTS nor by E57K in the Thy1-transgenic model.
Since a significant loss of neurons was reported in the CA3 region of 8-10 months old Thy1-WTS and Thy-E57K mice (Rockenstein et al. 2014), we analyzed neuronal loss at the age of 4 months. Quantifying activated Caspase3 (aCaspase3)-positive cells, the total number of neurons, and the volume for both the granule cell layer and the CA3 region, we found no significant differences in a-syn transgenic mice (STable 1, SFig. 3). This indicates that in the adult hippocampus of Thy1-WTS and Thy1-E57K mice, apoptosis-mediated neurodegeneration occurs after 4 months of age.
In light of these observations of unaffected hippocampal neurogenesis in Thy1-promoter-based a-syn models, we statistically compared the current findings to previously published quantifications of adult neurogenesis in PDGF-WTS by Winner et al. (2004) which were conducted using the same paradigm. When normalized to respective NTG, there is a significant promoter dependence of the effects of transgenic a-syn on the numbers of DCX-positive neuroblasts (Fig. 3k), BrdU-positive cells (Fig. 3l), and newborn neurons (Fig. 3m).
In summary, whereas the PDGF-WTS model shows a pronounced defect of hippocampal proliferation and neurogenesis at 4 months, these parameters remain unchanged in the Thy1-WTS and the Thy1-E57K models, but there is an integration phenotype.
Late transgenic expression of WTS and oligomerprone E57K a-syn in the Thy1-model
We next addressed how the temporal and spatial expression patterns of the Thy1-promoter and the PDGF-promoter were different. In the hippocampus of adult PDGF-WTS, we have previously shown transgene expression in Sox2positive neural stem cells, DCX-positive neuroblasts and NeuN-positive neurons (Winner et al. 2012). In Thy1-WTS, a-syn was detected neither in Sox2-positive adult hippocampal stem cells (Fig. 4a) nor in DCX-positive Numbers are given as mean ± SD and P values when compared to respective NTG. P value in bold indicates statistically significant difference neuroblasts (Fig. 4c). Expression was detected in NeuNpositive cells along with strong protein expression in the hilus and in the molecular layer (Fig. 4e). Similarly, in Thy1-E57K, transgenic a-syn was not expressed in Sox2positive stem cells (Fig. 4b) and DCX-positive neuroblasts (Fig. 4d). Expression in the somal compartment of dentate granule cells was weak, whereas highest expression was found in the molecular layer and the hilus (Fig. 4f). In conclusion, other than in PDGF-WTS, where transgenic a-syn is present at all stages of newborn neuron development, in Thy1-WTS and Thy1-E57K, the transgene is not present at the stem cell and neuroblast stages, but has a strong overall expression in the adult hippocampus (Fig. 4g). We additionally confirmed the presence of transgenic a-syn by Western blot of the dissected hippocampus of NTG, Thy1-E57K, and Thy1-WTS (Fig. 4h). As expected, the total amount of a-syn was increased in Thy1-WTS and Thy1-E57K when compared to NTG (Fig. 4i). Abundant a-syn oligomers were present in Thy1-E57K, whereas (Winner et al. 2004). Shown are relative changes of adult neurogenesis. For all groups, the respective NTG values were set at 100%. All three mouse models showed no changes of PCNA-positive cells (j). When compared to Thy1-WTS and Thy1-E57K and normalized to respective NTG, significant reductions are found in PDGF-WTS for the numbers of DCX-positive cells (k), BrdU-positive cells (l), and BrdU/NeuN double-positive cells (m); *P \ 0.05, **P \ 0.01 Fig. 4 Low intrinsic, but high extrinsic transgene expression in adult neuroblasts of Thy1-WTS and Thy1-E57K mice. a-f Colocalization analysis of transgenic a-syn at different stages of adult newborn neuron development. a, b Sox2-positive stem cells (arrows) were negative for transgenic a-syn in Thy1-WTS and Thy1-E57K. c, d DCX-positive hippocampal neuroblasts (arrows) were only partly co-labeled with a-syn antibody in Thy1-WTS and Thy1-E57K. e, f Expression of transgenic a-syn in the dentate gyrus was mainly confined to mature, NeuN-positive granule cells. High expression was noted in the hilus and in the molecular layer. GL granule cell layer, S subgranular zone, H hilus. g Model of the temporal expression pattern of transgenic a-syn under the control of the PDGF-and Thy1promoters. h Representative western blot and i analysis of the levels of a-syn in the hippocampus, showing that highest expression levels of monomeric a-syn (14 kDa) are found in Thy1-WTS, whereas dimers (28 kDa) and higher molecular weight oligomers ([ 42 kDa) are predominantly present in Thy1-E57K. Scale bars 25 lm Thy1-WTS showed a significant increase in monomeric a-syn when compared to NTG.
Discussion
In the current work, we compared the effect of wild-type and oligomerizing a-syn on adult hippocampal neurogenesis. To this end, we analyzed adult hippocampal neurogenesis in two transgenic mouse models of asynucleinopathy, overexpressing WTS and E57K-mutant a-syn under control of the Thy1 promoter. We found that in 1-month-old newborn neurons, transgenic WTS reduced mushroom spine density and transgenic E57K-mutant a-syn additionally reduced the density of all spines. In both transgenic groups, adult neurogenesis was unaffected in terms of numbers of proliferating and surviving cells, and we observed no overt neurodegeneration in the dentate gyrus. Furthermore, comparison with PDGF-WTS neurogenesis data shows that the effect of transgenic a-syn on adult cellular plasticity is promoter-dependent and may be related to absence of Thy1-regulated a-syn expression in neural stem/progenitor cells and neuroblasts. These data suggest that the effect of a-syn on adult neurogenesis depends on cell-autonomous expression in neural stem/ progenitor cells and that reduction of post-synaptic spine density may constitute an early pathogenic function of oligomeric a-syn.
Mushroom spine loss of newborn neurons as a common phenotype of a-syn transgenic mice
In the current study, mushroom spine density of newborn neurons was significantly reduced in Thy1-WTS and Thy1-E57K, similar to our previous observation in PDGF-WTS (Winner et al. 2012). Mushroom spines have been suggested to mediate particularly strong and stable synaptic input, based on their large head size, the enrichment in F-actin, and their relatively low motility (Sala 2002;Kasai et al. 2003). A-syn, in turn, impairs microtubule-dependent cytoskeleton changes (Prots et al. 2013). As overall neurite morphology is rather fixed in the late-stage newborn neuron development, late expression of Thy1-regulated a-syn may thus specifically impair dendritic spines, whereas dendrite length and dendritic branching remain unaffected. In light of the reduction of overall hippocampal synaptophysin at 8-10 months (Rockenstein et al. 2014), we suppose that the mushroom spine reduction persists or aggravates at later time points after cell birth. However, we cannot exclude a slowdown of mushroom spine maturation with normalized densities at later time points, because spine motility is highest from 1 to 2 months after cell birth (Zhao et al. 2006). Interestingly, in a mouse model overexpressing A30P-mutant a-syn under control of the Thy1 promoter, spine formation on adult-born granule cells of the olfactory bulb was also compromised beginning 3-4 weeks after labeling which was related to the critical time point of spine formation (Neuner et al. 2014).
Novel pathogenic effect of oligomer-prone a-syn on spine density of newborn neurons Overall spine density of newborn neurons was intact in Thy1-WTS, but severely reduced in Thy1-E57K. As we analyzed two analogous transgenic mouse models, we conclude that oligomer-prone a-syn exacerbates spine pathology of newborn neurons. Reduction of overall synaptophysin in the hippocampus and loss of hippocampal NeuN-positive mature neurons is more severe in Thy1-E57K than in Thy1-WTS at the age of 8-10 months (Rockenstein et al. 2014). Our analogous analysis at 4 months revealed unchanged hippocampal neuron numbers, suggesting that a-syn oligomerization-mediated mushroom spine pathology of newborn neurons precedes overt neurodegeneration. This seems to be a time-and dose-dependent effect, since the spatial expression pattern of transgenic a-syn in the granule cell layer and the hilus was similar when comparing 4-month-old (Fig. 4) and 8-10-month-old (Rockenstein et al. 2014) animals.
In post-mortem tissue of DLB cases, high amounts of aggregated a-syn were found in the presynaptic compartment along with loss of postsynaptic dendritic spines, suggesting that presynaptic a-syn might be a trigger of functional impairment (Kramer and Schulz-Schaeffer 2007;Burke and O'Malley 2013). In line with this observation, transgenic a-syn did not substantially colocalize with newborn neurons' dendrites in Thy1-WTS and Thy1-E57K (Rockenstein et al. 2014;SFig. 1). Transgenic a-syn may thus be mainly present within axon terminals of the perforant path. Indeed, axonal pathology and dysregulation of axonal transport proteins precede dopaminergic neuron loss in an AAV-model of synucleinopathy (Chung et al. 2009). Thus, spine loss of newborn neurons may represent an early feature of pathology in the Thy1-E57K model and might serve as a marker of disease progression.
A-syn spreads among neuronal circuits leading to the continuous propagation of pathology in a-synucleinopathies (Desplats et al. 2009;Hansen et al. 2011;Luk et al. 2012). Oligomeric a-syn is more disposed to propagation, which may explain increased pathology in Thy1-E57K (Peelaerts et al. 2015). Extracellular presence of oligomeric a-syn in acute hippocampal slices impaired long-term potentiation in the CA1 pyramidal synapse and increased basal synaptic transmission (Diógenes et al. 2012). Another study on hippocampal neurons showed that extracellular oligomeric a-syn amplifies glutamate-induced toxicity (Hüls et al. 2011). High levels of a-syn oligomers are present in the hippocampus of Thy1-E57K mice (Fig. 4) and may thus elicit similar excitotoxic effects.
Most dendritic spines of adult-born neurons integrate by competing for the existing synapses, indicating that spineformation is activity-driven (Toni et al. 2007). Moreover, about 1 month after neuronal birth, long-term potentiation is facilitated by increased potentiation amplitude and decreased induction thresholds (Ge et al. 2007). Accordingly, spine formation may be compromised in Thy1-E57K mice due to a reduction of presynaptic signaling. Reduced neurotransmitter release has, indeed, been shown in a-syn models and loss of hippocampal synaptophysin in Thy1-E57K is an indirect sign of decreased synaptic input (Rockenstein et al. 2014). Neurotransmitter release was impaired upon a-syn overexpression in primary hippocampal and midbrain neurons and in hippocampal slices from a-syn transgenic mice (Nemani et al. 2010). Changes in vesicle release might be caused by a direct effect of a-syn on SNARE proteins and vesicle priming (Chandra et al. 2005;Larsen et al. 2006). Alternatively, we cannot exclude low levels of propagation of E57K a-syn into newborn neurons or cell-intrinsically expressed E57K a-syn which may have contributed to the observed spine loss in a cell-autonomous manner.
Early expression of transgenic a-syn is necessary to impair adult neuronal survival Our data of late transgene expression under regulation of the Thy1 promoter are compatible with reports, showing that promoter activity of Thy1 is absent during embryonic development, has an onset around birth, and reaches a plateau 1 month postnatally (Aigner et al. 1995;Caroni 1997;Lüthi et al. 1997;Wiessner et al. 1999;Kahle et al. 2001). The PDGFb-promoter, on the other hand, is active already during the embryonic development (Sasahara et al. 1991(Sasahara et al. , 1992. In the PDGF-WTS model, we have previously shown that PDGFb-promoter-driven a-syn is also expressed in adult hippocampal stem cells and neuroblasts (Winner et al. 2012), contrasting to our current results in the Thy1-models. Therefore, our suggested expression kinetics (Fig. 4g) are based on direct expression data in neuroblasts together with the correlation to embryonic development. These results are corroborated by the analysis of two independently generated transgenic mouse models and by analogous results from the expression of transgenic a-syn in the subventricular zone and olfactory bulb of Thy1-WTS mice (Schreglmann et al. 2015). We observed a significant reduction of the late-stage neuroblasts in Thy1-WTS (SFig. 2h). However, due to unchanged dendrite length and unchanged BrdU-positive cell numbers in Thy1-WTS, a delay of maturation or a major loss of late-stage neuroblasts is unlikely. Given the discrepancy between Thy1-and PDGF-promoter-based mouse models in the adult neurogenic niche, we suggest that cell-autonomous overexpression of a-syn during the stem-and progenitor cell state is necessary to impair survival of their progeny.
Indeed, the matter of temporo-spatial promoter regulation is well known from many studies of adult neurogenesis in transgenic models of Alzheimer's disease (Mu and Gage 2011). Different types of transgenic mouse models overexpressing human amyloid precursor protein (hAPP) showed a reduction of proliferation and newborn neuron survival in the adult hippocampus (Haughey et al. 2002;Crews et al. 2010). However, lack of cell-autonomous hAPP-expression within adult neural stem/progenitor cells spared adult neurogenesis (Yetman and Jankowsky 2013). Our data thus argue for a deleterious effect of WTS within immature adult neural progenitors rather than a developmental defect. For this reason, the PDGF-WTS mouse model provides a recapitulation of a-syn-induced stem cell pathology. However, for the matter of progressive spine loss followed by neurodegeneration, Thy1-based models seem to be of choice. | 6,195 | 2017-11-09T00:00:00.000 | [
"Biology"
] |
Radiosynthesis and in Vivo Evaluation of Two PET Radioligands for Imaging α-Synuclein
Two α-synuclein ligands, 3-methoxy-7-nitro-10H-phenothiazine (2a, Ki = 32.1 ± 1.3 nM) and 3-(2-fluoroethoxy)-7-nitro-10H-phenothiazine (2b, Ki = 49.0 ± 4.9 nM), were radiolabeled as potential PET imaging agents by respectively introducing 11C and 18F. The syntheses of [11C]2a and [18F]2b were accomplished in a good yield with high specific activity. Ex vivo biodistribution studies in rats revealed that both [11C]2a and [18F]2b crossed the blood-brain barrier (BBB) and demonstrated good brain uptake 5 min post-injection. MicroPET imaging of [11C]2a in a non-human primate (NHP) confirmed that the tracer was able to cross the BBB with rapid washout kinetics from brain regions of a healthy macaque. The initial studies suggested that further structural optimization of [11C]2a and [18F]2b is necessary in order to identify a highly specific positron emission tomography (PET) radioligand for in vivo imaging of α-synuclein aggregation in the central nervous system (CNS).
Introduction
Although Parkinson's disease (PD) is a degenerative neurological disorder characterized by motor symptoms, it is also known to be closely associated with dementia [1]. The primary neuropathologic change in PD, the degeneration of dopaminergic neurons, occurs in the substantia nigra, accompanied by Lewy bodies (LB) and Lewy neurites (LN). To date, the pathogenic mechanism of PD is not fully understood [2]. The diagnosis of PD is primarily based on the clinical symptoms, such as resting tremor, bradykinesia and rigidity. Because the current treatment for PD is to minimize the disease symptoms in the patients [1,3], a method of diagnosing PD at a very early stage would greatly help physicians to design the therapy accordingly.
α-Synuclein (α-syn) is a presynaptic terminal protein that consists of 140 amino acids; the aggregation of α-syn is considered the pathological hallmark of PD. α-Syn plays an important role in the central nervous system (CNS) in synaptic vesicle recycling; it also regulates the synthesis, storage and release of neurotransmitters [4]. It is specifically upregulated in a discrete population of presynaptic terminals of the brain during acquisitionrelated synaptic rearrangement [5]. α-Syn naturally exists in a highly soluble, unfolded state [6,7]. However, in PD brains, insoluble aggregation of misfolded fibrillar α-syn occurs in LB and LN, which may cause synaptic dysfunction and neuronal cell death [8][9][10][11]. Positron emission tomography (PET) is a non-invasive imaging modality that can provide the functional information of a living subject at the molecular and cellular level. Current diagnostic PET radioligands for PD target either the dopaminergic system (pre-synaptic and post-synaptic dopamine activity) or vesicular monoamine transporter type 2 (VMAT2) [12,13]. Unfortunately, such imaging strategies have difficulty in distinguishing PD from other parkinsonian syndromes that also result in the degeneration of nigrostriatal projections [14,15]. In addition, dopaminergic medications used for symptomatic treatment may alter striatal uptake of these agents, limiting their reliability for measuring disease progression [16]. In contrast, α-syn is a valuable imaging target for PD, because fibrillar α-syn deposition in LB and LN distinguishes PD from other disorders and is the defining feature for post-mortem pathologic diagnosis. Thus, a small molecular PET radiotracer with high affinity and selectivity to fibrillar α-syn protein could be used to quantify the level of α-syn aggregation non-invasively. This will not only improve the diagnostic accuracy of PD, but also provide a tool to improve the understanding of disease progression and monitor the therapeutic efficacy in clinical trials.
Our group previously reported the syntheses of a class of tricyclic analogues and their in vitro binding affinities towards α-syn fibrils; several lead compounds were identified with moderate affinities for α-syn fibrils (K i < 70 nM) (Figure 1, 2a, 2b) [17]. Compounds 2a and 2b also displayed favorable binding selectivity to α-synuclein aggregation compared to Aβ and tau protein: for 2a, K i-α-syn /K i-Aβ > 3-fold and K i-α-syn /K i-tau > 4-fold; for 2b, K i-α-syn /K i-Aβ = 2.1-fold and K i-α-syn /K i-tau = 2.5-fold [18]. The radioiodinated ligand, [ 125 I]1, was synthesized to establish a methodology for screening the α-syn fibril binding affinity of new ligands using a competition binding assay [18]. The affinities for 2a and 2b were determined using this [ 125 I]1 assay, and the resulting K i values (66.2 nM for 2a, 19.9 nM for 2b) were in the same range as the values obtained by the Thioflavin T assay. The 125 I competition assay further confirmed the previously determined in vitro potency of 2a and 2b, which were developed as potential PET radioligands to be radiolabeled by 11 [17] with necessary modification.
Radiosynthesis of [ 11 C]2a
: Approximately 1.2 mg of Precursor 4 was placed in the reaction vessel, and 0.20 mL of DMF was added, followed by 3.0 μL of 5 N NaOH. The mixture was thoroughly mixed on a vortex for 30 s. A stream of [ 11 C]CH 3 I in helium was bubbled for 3 min into the reaction vessel. The sealed vessel was heated at 90 °C for 5 min, at which point the vessel was removed from heat and 20 μL 1,8-diazabicyclo [5.4.0]undec-7ene (DBU) in 50 μL DMF was added via syringe. The reaction mixture was heated at 90 °C for 7 min (Scheme 2); then, the reaction was quenched by adding 1.7 mL of the HPLC mobile phase, which was composed of acetonitrile/0.1 M ammonium formate buffer (60/40, v/v) and pH = 4.5. The diluted solution was purified by high performance liquid chromatography (HPLC) by injection on a Phenomenex Luna C18 reverse phase column (9.4 × 250 mm) using the mobile phase described above. The radiolabeled product was eluted using a flow rate of 4.0 mL/min, and the UV wavelength was set as 254 nm. Under these conditions, the retention time of Precursor 4 was ∼7 min; the retention time of [ 11 C]2a was ∼16 min. [ 11 C]2a was collected in a vial containing 50 mL Milli-Q water, which was then passed through a Sep-Pak Plus C18 cartridge (Waters, Milford, MA, USA). The trapped product was eluted with ethanol (0.6 mL) followed by 5.4 mL 0.9% saline. After sterile filtration, the final product was ready for quality control (QC) analysis and animal studies. QC was performed on a Phenomenex Prodigy C18 reverse phase analytic HPLC column (250 mm × 4.6 mm, 5 μA) and UV detection at a 254 nm wavelength. The mobile phase was acetonitrile/0.1 M ammonium formate buffer (80/20, v/v) using a 1.5 mL/min flow rate. Under these conditions, the retention time of [ 11 C]2a was 4.82 min. The radioactive dose was authenticated by co-injection with the cold standard Compound 2a. Radiochemical purity was >99%; the chemical purity was >95%; the labeling yield was 35%-45% (n = 4, decay corrected to EOB), and the specific activity was >363 GBq/μmol (decay corrected to EOB, n = 4).
Radiosynthesis of [ 18 F]2b:
The eluted solution formed two phases: the top ether phase was transferred out, and the bottom aqueous phase was extracted with another 1-mL aliquot of ether. The combined ether extracts were passed through a set of two sodium sulfate Sep-Pak Plus dry cartridges into a reaction vessel. After ether was evaporated with a nitrogen stream at 25 °C, 1.0 mg of Precursor 4 was dissolved in 200 μL DMSO and was transferred to a vial containing 1-2 mg Cs 2 CO 3 . After vortexing for 1 min, the Cs 2 CO 3 saturated solution was added to the reaction vessel containing the dried radioactive [ 18 F]/F − . The tube was capped and briefly vortexed and then kept at 90 °C for 15 min. Ten microliters of 1,8-diazabicyclo [5.4.0]undec-7-ene (DBU) in 50 μL DMSO was added via syringe. The reaction mixture was heated at 90 °C for 15 min. The mixture was subsequently diluted with 3 mL of the HPLC mobile phase (50/50 acetonitrile/0.1 M formate buffer, pH = 4.5) and purified using a Semi-Prep HPLC system for purification. The HPLC system contains a 5-mL injection loop, an Agilent SB C-18 column, a UV detector at 254 nm and a radioactivity detector. At a 4.0 mL/min flow rate, the retention time of the product was 19-21 min, whereas the retention time of the precursor was 8-9 min. After the HPLC collection and being diluted with 50 mL sterile water, the product was trapped on a C18 Sep-Pak Plus cartridge. The product was eluted with ethanol (0.6 mL) followed by 5.4 mL 0.9% saline.
After sterile filtration, the final product was ready for quality control (QC) analysis and animal studies. An aliquot was assayed by analytical HPLC (Grace Altima C18 column, 250 × 4.6 mm), UV at 276 nm, mobile phase of acetonitrile/0.1 M ammonium formate buffer (71/29, v/v), pH = 4.5. Under these conditions, the retention time for [ 18 F]2b was approximately 4.9 min at a flow rate of 1.5 mL/min. The sample was authenticated by coinjecting with the cold standard 2b solution. The radiochemical purity was >98%; the chemical purity was >95%; the labeling yield was 55%-65% (n = 4, decay corrected), and the specific activity was >200 GBq/μmol (decay corrected to EOB, n = 4).
Biodistribution Studies
All animal experiments were conducted in compliance with the Guidelines for the Care and The whole brain was quickly removed and dissected into segments consisting of brain stem, thalamus, striatum, hippocampus, cortex and cerebellum. The remainder of the brain was also collected to determine total brain uptake. At the same time, samples of blood, heart, lung, muscle, fat, pancreas, spleen, kidney, liver (and bone for [ 18 F]2b) were removed and counted in a Beckman Gamma 8000 well counter with a standard dilution of the injectate. Tissues were weighed, and the percent injected dose (%ID)/gram for each tissue was calculated.
MicroPET Brain Imaging Studies of [ 11 C]2a in Cynomolgus Macaque
Following the initial evaluation in rats, the washout kinetics and ability of [ 11 C]2a to cross the blood-brain barrier in a non-human primate (NHP) was determined on an adult male cynomolgus macaque (6-8 kg) using a microPET Focus 220 scanner (Concorde/CTI/ Siemens Microsystems, Knoxville, TN). Before each scan (n = 2), the animal was fasted for 12 h and then initially anesthetized with ketamine (10 mg/kg) and glycopyrrolate (0.13 mg/kg) intramuscularly. Upon arrival at the scanner, the monkey was intubated, and a percutaneous catheter placed for tracer injection. The head was positioned supine in the adjustable head holder with the brain in the center of the field of view. Anesthesia was maintained at 0.75%-2.0% isoflurane/oxygen and the core temperature maintained at 37 °C. A 10-min transmission scan was performed to confirm positioning; this was followed by a 45-min transmission scan for attenuation correction. Subsequently, a 2-h dynamic emission scan was acquired after venous injection of 300-370 MBq of [ 11 C]2a.
Chemistry
Compounds 2a and 2b possess a methoxy and fluoroethoxy group, respectively, enabling radiolabeling through O-alkylation of the corresponding phenol precursor. However, to avoid undesired N-alkylation product, the acetyl protected Compound 4 was used as the precursor for the radiosyntheses. As shown in Scheme 1, the synthesis of 4 was accomplished by a two-step strategy starting from 2a following our previous procedure [17]. N-acetylation of 2a was achieved using acetyl chloride. Removing the O-methyl group of 3 with boron tribromide afforded the phenol Precursor 4, which was used in the radiosyntheses of 2a and 2b. Due to the reaction scale difference, the yields for certain reactions differ slightly from our previous report.
Radiochemistry
The radiosynthesis of [ 11 C]2a was accomplished by a two-step approach. The reaction of the phenol Precursor 4 with [ 11 C]CH 3 I was performed in DMF in the presence of NaOH [19][20][21], and the N-acetyl group of the 11 C-labeled intermediate was removed by DBU following the literature procedure [22], as outlined in Scheme 2. [ 11 C]2a was obtained in approximately 35%-45% overall radiochemical yield (RCY) after HPLC purification (n = 4). The radiochemical purity of [ 11 C]2a was >99% and chemical purity was >95%. [ 11 C]2a was identified by co-eluting with the solution of standard 2a. The entire synthetic procedure, including the production of [ 11 C]CH 3 I, radiosynthesis, HPLC purification and formulation of the radiotracer for animal studies, was completed within 50-60 min. [ 11 C]2a was obtained in a specific activity of >363 GBq/μmol at EOB (n = 4).
Biodistribution in Rats
The radioactivity distribution in various organs after the injection of [ 11 C]2a and [ 18 F]2b in rats is summarized in Table 1. Both radiotracers displayed homogeneous distribution in the brain regions as shown in Figure 2A,B. For [ 11 C]2a, the total brain uptake (%ID/gram) of radioactivity at five, 30 and 60 min post injection were 0.953 ± 0.115, 0.287 ± 0.046 and 0.158 ± 0.014 respectively; in the peripheral tissues, liver had the highest uptake among the tissues analyzed; the uptake (%ID/gram) in liver reached 2.198 ± 0.111 at 5 min and 1.116 ± 0.024 at 60 min. For [ 18 F]2b, the total brain uptake (%ID/gram) at five, 30, 60 and 120 min was 0.758 ± 0.013, 0.465 ± 0.018, 0.410 ± 0.030 and 0.359 ± 0.016, respectively; At 5 min post-injection, this compound also has a high liver uptake: (%ID/gram) of 1.626 ± 0.221. However, after 30 min, the kidney has retained the highest radioactivity of all tissues that were analyzed. The bone uptake (%ID/gram) was very stable and no defluorination was observed for [ 18 F]2b. More importantly, the ex vivo rat biodistribution data revealed that both compounds readily crossed the BBB and entered the brain. Both tracers exhibit high initial brain uptake and appropriate washout kinetics in the brain of normal rats. Rapid clearance of the radioactivity for both [ 11 C]2a and [ 18 F]2b was observed from brain, as well as other organs, such as lung, pancreas, spleen, kidney and liver. However, [ 11 C]2a showed faster wash-out kinetics than [ 18 F]2b, as shown in Figure 2. [ 11 C]2a was chosen for subsequent microPET evaluation in an NHP.
MicroPET Studies in NHP
The representative summed images from zero to 120 min were co-registered with MRI images to accurately identify the regions of interest (Figure 3). The time-activity curve (TAC) revealed high initial uptake of [ 11 C]2a in the brain, which peaked at 3 min postinjection; then, the radioactivity was quickly washed out from all brain regions. The summed image revealed a homogeneous distribution of radioactivity in the brain of the normal cynomolgus macaque. The microPET studies suggested that [ 11 C]2a was able to cross the BBB and had a fast washout kinetics from the brain regions. The macaque used in the studies was a healthy young adult, and the distribution of the α-syn radioligand throughout the brain regions was homogeneous. Higher expression of α-syn protein in particular regions should not be observed in healthy subjects; thus, homogeneous distribution of the radioactivity in the macaque brain was expected. Nevertheless, PET studies of [ 11 C]2a performed on an NHP model bearing the over-expression of α-syn aggregation will directly determine the in vivo specificity of the radiotracer. | 3,515.4 | 2014-03-17T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Active module identification in intracellular networks using a memetic algorithm with a new binary decoding scheme
Background Active modules are connected regions in biological network which show significant changes in expression over particular conditions. The identification of such modules is important since it may reveal the regulatory and signaling mechanisms that associate with a given cellular response. Results In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on the scheme, we also design and incorporate a local search operator into the memetic algorithm to improve its performance. Conclusion The effectiveness of proposed algorithm is validated on both small and large protein interaction networks.
Background
With the increased use of high-throughput experimental data such as gene expression profiles, protein-protein interactions and metabolic response [1], we are able to gain better understanding of the molecular mechanisms of biological functions. Because molecules interact with each other to exert biological functions, it is important to understand not only the activity of individual molecules, but also their interaction. In the past decade, network biology approaches which explicitly model the molecule interactions as graphs or complex networks have been intensively used. One of the primary tasks is to explore topological properties of biological networks, such as community structure [2] and network motifs [3]. Though the topology of a biological network does not always precisely reflects the function or even disease-determined *Correspondence<EMAIL_ADDRESS>1 School of Computer Science, University of Birmingham, B15 2TT Birmingham, UK Full list of author information is available at the end of the article regions [4], they may have some overlapped components, which then can be related back to biological functions.
Active module identification is one of the most important network biology analysis algorithm, which is able to reveal the regulatory and signaling mechanisms of a given cellular response [5]. The algorithm aims to find an connected regions over certain biological networks that show significant changes under certain conditions. In the seminal work of [5], the authors first constructed proteinprotein interaction network where the nodes represent proteins, and edges represent the physical interactions between a pair of proteins. Node scores which indicate the significance of expression changes over certain conditions were calculated from the gene expression data and then assigned to the nodes. The active module identification problem was formulated as a combination optimization problem, which aims to search a subnetwork that maximize the aggregated score.
This combinatorial optimization problem turns out to be NP-hard [5], which is equivalent to finding a maximum weight clique in a weighted graph, a famous NP-complete problem [6]. As effective tools to solve combinatorial problems, metaheuristic algorithms have been widely applied to search satisfied solutions [7,8]. The original paper [5] proposed to use simulated annealing (SA), a generic probabilistic metaheuristic to solve this problem. Other methods include extended simulated annealing [9], greedy algorithm [10,11], graph-based heuristic algorithm [12] and genetic algorithm (GA) [13,14]. A comprehensive review of this filed can be found in [15].
Binary encoding is the most common solution representation for active module identification using metaheuristic optimization algorithms such as SA or GA. In this encoding, the module in n-nodes network can be represented by membership vector x ∈ {0, 1} n , where x i = 1 means i-node belongs to the module. One of the prerequisites to use this representation is to ensure the connectedness of the solution, which is not only a biological requirement for resulting subgraphs (connected subgraph means reachable interactions inside the module). Without the connectedness constraint, the maximal objective may corresponds to a set of unrelated top-ranked nodes. Unfortunately most related works mentioned above either did not consider this non-trivial constraints, or did not tackle this aspect efficiently.
Another problem of using generic metaheuristic optimization algorithms is that the search operators, i.e., perturbation [5], mutation and crossover [14], are not specifically designed for active module identification, which might result in mediocre search performance in terms of speed and accuracy. In our previous works, we have shown that by incorporating local search operators into generic metaheuristic optimization algorithms, we can significantly improve the speed and accuracy for community detection in large scale biological networks [16,17].
In this paper, in order to address the connectedness problem, we first propose an effective encoding/decoding scheme. Based on the representation, we propose a local search operator and then embed it into a memetic framework. We have evaluated the proposed method for both simulated and real-world data, which shows the superior performance over other algorithms.
Active module identification
Commonly the an interaction network is represented as an undirected graph G = (V , E), nodes in V represent genes, and edges in E represent the interactions between two genes. We can assign each gene i a p-value p i to indicate the significance of expression changes over certain conditions. Then we can obtain a z-score z i = −1 (1 − p i ) for each gene, where −1 is the inverse of normal CDF.
To find a subnetwork which has high nodes scores, the aggregation z-score of subnetwork A z A is defined as [5]: where k is the number of genes in A. In order to get subnetwork which has higher aggregation z-score compared with a random set of genes, it is suggested to use a corrected subnet score s A [5]: where the mean μ k and standard deviation σ k are computed based on a Monte Carlo approach, taking several rounds of randomly sampling k genes from the network. The simplified problem of finding highest score module in an undirected network, which consider the subnetwork score is the sum of each node's score, is formally defined as following: In order to solve Problem 1, which is a NP-hard combinatorial optimization problem, meta-heuristics algorithms have been applied. For example, simulated annealing was used in [5]. In each iteration, if toggling the state of a randomly picked node can increase s A of expected subnetworks, then one choose to toggle it; otherwise to toggle it with certain probability. After a number of iterations, a set of high score subnetworks can be obtained. In [14], based on binary encoding scheme, Genetic Algorithm with genetic operators such as mutation and crossover has been proposed to search for active modules.
New binary encoding/decoding scheme for active module identification
Despite the biological insightful results obtained from the algorithms mentioned above, one important detail was omitted in the papers: how to ensure the connectedness of the resulting subgraph after applying heuristic operators such as toggling, mutation or crossover. This detail is important because without ensuring the connectedness of a candidate solution, the identification of active modules could be trivial, i.e., a set of isolated top-ranked nodes.
In the source code provided by the original authors (jAc-tiveModules, a plug-in for Cytoscape [18]), the authors employed a sophisticated way to check whether toggling one node of a membership vector is feasible, i.e., whether the toggling will affect the connectedness of the candidate solution, which makes the whole algorithm slow.
Specifically, given a candidate solution, i.e., a subset of nodes, an additional HashMap has to be maintained to stores the pairwise elements {node, comp}, which indicates each node and its component (connected subnetwork), respectively, during the whole progress. After toggling, the algorithm will check this HashMap to see whether the operator affects the connectedness of resulted subnetworks. Such operations leads to both running time and memory overhead.
In this paper, we propose a simple but fast binary encoding/decoding scheme, which does not require the HashMap nor explicit operations when add or remove current nodes. Our binary encoding scheme is the same as used in [14], i.e., a binary vector of n binary values of which each represents the membership of the node (x i = 1 means i-node belongs to the module). The key difference is the decoding scheme. Wile the previous work [14] did not consider the connectedness constraint. Specifically, we conduct the connected components finding (CCF) algorithm on the binary vector presented subset, and then extract the connected subnetworks. Decoding scheme based on CCF algorithm as described in Algorithm 1, where Breadth-first search (BFS) is used to recursively to find the node's neighbors. Since there are multiple connected subgraphs in a candidate solution, the fitness calculation can be flexible. In the simplest case, we can use the subgraph with the highest aggregated node score. However, no matter how we calculate the fitness function, genetic meta-heuristics algorithms can be directly applied based on the encoding/decoding scheme. For example, if we use SA, in each iteration, we decide to add or remove a randomly picked node by the same criterion: if toggling the state of the selected node c can increase s A of the subnetwork A with the highest aggregated node score, then we choose to toggle it; otherwise to toggle it with certain probability p. Compared with original mechanism of jActiveModules in Cytoscape, this decoding is computational tractable and easy to implement.
The connected components finding Algorithm 1 is actually based on breadth-first search (BFS) on a (sub)graph, requiring time complexity O(|V | + |E |) where |V | and |E | are the number of nodes and edges of the current set respectively. Notice that this time complexity is only equivalent to one case to toggle a node in jActiveModules in theory.
Memetic algorithm
Evolutionary algorithm (EA) is a powerful global optimization to solve combinatorial optimization problems. Inspired by biological evolution, a typical EA uses operators such as selection, crossover and mutation to improve the candidate solutions [19]. Parameters for an EA are number of iterations T, population size P, crossover probability p c and mutation probability p m .
Memetic algorithm (MA) improved standard EA by enabling individuals to perform local refinements [20]. Numerous effective local search (LS) methods have been developed and incorporated into MA to obtain stateof-the-art results in various applications [21][22][23]. A recent review of MA can be found in [24]. Algorithm 2 describes a common framework of MA, where the standard mutation operation is replaced by a local search operator. Being similar to conventional GA algorithms which partially prevent the "local optimum" problem by mutation and crossover mechanisms, Algorithm 2 uses an enhanced mutation step. With enough number of evolutionary generations, this algorithm is supposed to convergence. According our encoding/decoding scheme, each candidate solution consists of several connected subgraphs, we define the highest score of these subgraphs as the fitness of x, denoted by F(x). For multiple modules identification, we use a module extraction mechanism, i.e. to identify one active module each time and then extract it from the background network, which is left for next round.
For the local search part, here we mainly consider a simple greedy search strategy. We pick all individuals in the population with probability p LS and conduct M times of toggle on current individual where M < N. Finally we replace each chosen individual with the best scored one, followed by other genetic operators. More operations as in [22] to conduct local search could be applied here.
It is necessary to make sure the identified module has reasonable size when toggling nodes. Both extreme small and large module can make the interpretation difficult. But the objective (2) itself cannot prevent large modules. Neither original work [5] nor GA based method [14] proposed mechanisms to achieve reasonable sized modules. Furthermore, to maximize objective (2) may lead to single gene module or very large component in practice. As long as one large module (e.g. containing 1,000 genes) is connected and has high aggregated score, then this module may be found using general algorithm 2.
Here we make a simple modification to the mutation operator in GA and local search operator in MA to constrain the module size to be desired: as long as the number of candidate genes (number of '1's in encoding vector) exceeds some threshold N max , there will be no more potential nodes added to the subset. On the contrary, if the module size is going to be smaller than predefined threshold N min , there will be no more potential nodes removed out from the current subset.
The procedure of local search is described as in Algorithm 3. The whole procedure of MA for active module identification is combining general memetic framework 2 and the local search strategy. For evolutionary operations in the whole procedure, we chose the commonly used one-point crossover.
The computational complexity for memetic Algorithm 2 is O(TP) without local refinements. The expected computational complexity of whole algorithm with greedy search is thus O(TP + TM(|V | + |E |)) where |V | and |E | are the number of nodes and edges of a candidate solution subgraph respectively. If we consider almost half of the whole nodes may get involved in evolution and normally the number of edges |E | in subgraph approximately at the same level of the number of nodes |V |, the simplified complexity of the whole algorithm should be O (TP + TMN). Generally the size of population P is small compared with the network size N, which makes the latter dominate the running time. And the number of local search trails M in each inner iteration also has an impact on the efficiency. In theory the sophisticated mechanism of jActiveModule can also be used here, but it would makes the fitness evaluation more difficult. And the space requirement is higher due to the HashMap.
Algorithm 3: Greedy search for MA on active module identification Procedure of local search ; for each individual in population do
Select current individual x with probability p LS ; x best = x; for i = 1→ M do Generate individual x by toggle a random position j on x best though the following procedure; if x best j == 1 and x best > N min then x = x best by x best j = 0; end else if x best j == 0 and x best < N max then x = x best by x best j = 1; end Conducting Algorithm 1 on x and calculating the module score F(x ); if F(x ) > F(x best ) then x best = x ; end end end
Module connectedness validation
First of all, we validate if the modules identified by proposed algorithm are connected. The baseline algorithm is a simple GA with basic binary encoding scheme without connectedness guarantee to search highly scored module in molecular networks. We use a simulated interaction network with 500 nodes and 1000 edges, to just validate the connectedness property. Figure 1 showed the resulted module, and the red nodes are in subset of resulted module and gray ones are their neighbors but not included in the subset. We can see that the original subset is not connected at nodes like 185, 400 and 163 etc, which are isolated from large set of red nodes. If we use the same GA algorithm with the proposed encoding mechanism in section 3, we can get a different result as Fig. 2 shows. With the same input and algorithmic parameters, the red nodes are now connected in the identified active module. The standard GA (modified from COSINE [14]) and visualization code are available at https://github.com/ fairmiracle/EAModules.
Yeast PPI network
We first validate the proposed algorithm on a small real protein-protein network with 329 proteins in Yeast [25]. The p-values on each nodes show the significance of gene expression changes in response to a single perturbation: a strain with a complete deletion of the GAL80 gene versus We compare the performance of three algorithms using the encoding method in section 3: simulated annealing (SA), genetic algorithm (GA) and the proposed memetic algorithm (MA). In order to compare SA with other two Fig. 2 Modules identified by modified GA with proposed encoding scheme on the same simulated data as in Fig. 1. The red nodes are connected EAs fairly, we run SA P (also the population size in GA and MA) times and select the best result, since SA is viewed as a single population GA. The number of iterations T for all algorithm is 10000, and temperatures decrease from 1.0 to 0.01 for SA. Other evolutionary parameters are crossover rate p c = 0.9 for GA and MA, mutation rate p m = 0.9 for GA and local search iterations M = 10 for MA. In GA and MA we also reserve the best individual in each iteration for stability. We run each algorithm 50 times with randomly initialization and then compare the performance w.r.t highest module score and corresponding module size. Figure 3 summarizes the results in terms of module score based on 50 trails. We can see MA can achieve slightly higher mean score than GA, and both are better than SA. One-Way Analysis of Variance (ANOVA) is used to determine differences between results from three algorithms, with p − value < 2.2e −16 . And a paired sample t-test is used to tell the difference between GA and MA, with p − value < 1.19e −5 .
Besides the quality of module, we also compare the rate of convergence of three algorithms, to see how objective improves along with iterations. We define the best objective value in population as the indicator in each iteration. According to Fig. 4, MA reaches the stable objective earlier than GA. The local search scheme could make sure the performance of MA is no worse than basic GA, and the monotonic selection leads to early convergence compared with GA, at the cost of longer running time of local search. Both GA and MA get higher objective than SA, which needs much more iterations to reach high score.
Human PPI network
In order to check the biological relevance of identified modules by proposed algorithm, we apply it on the real world protein-protein interactions (PPI) network. The background PPI network for homo sapiens is obtained from two updated databases: BioGRID [26] Release 3.4.138 and STRING v10.0 [27], specifically 9606.protein.links.v10.txt. The BioGRID for homo sapiens has 362,775 interactions while STRING stores 8,548,002 protein pairs, with a combined score ranging from 150 to 999 for each link. The gene expression profile comes from GEO35103 controlled by the differentiation of Th17 cell, which is considered to play a key role in pathogenesis of autoimmune and inflammatory diseases [28]. The expression profile contains 48,000 probes (genes), and 28,870 were kept after the following process: 1) remove probes those do not have gene symbols; 2) remove probes with more than 20% of missing values or NAs; 3) replace the rest missing data with mean value of the row they belong to. Further we select 5003 significantly expressed genes from all of them using limma [29]. The gene filtering algorithm selects some potentially important candidates and reduce network size. Finally we select PPI pairs according to match of expression probes. For BioGRID we simply match the gene names for each probe of expression profile. But STRING uses the protein name (start with ENSP), thus we need to match that with official symbols (like ARF5) with database Ensembl Genes 84 [30], and select the corresponded genes. The source code for genes selection and construction procedure of PPI network from multiple data sources is available at https://github.com/fairmiracle/PPINet. The network constructed from BioGRID has 2327 nodes and STRING has 1602 nodes, with 1480 nodes in common. We conduct the algorithm 2 on both networks, and use a module extraction method to identify multiple modules from this network, i.e. to identify one active module each time and then extract it from the background network, which is left for next round. The largest size of each module is 100. The full gene symbols lists of modules are provided in supplementary materials (at https://github.com/fairmiracle/ EAModules/tree/master/examples/Supplementary, where "GSE35103FromString_MA.txt" means the modules identified from STRING based PPI network using MA algorithm, and each module is stored as plain text by module score, gene ids and official gene symbols). We can also see that under the same condition, MA could achieve higher scored modules than GA.
In order to validate the identified modules, we follow the gene set enrichment analysis [31] and use various updated tools, including basic gene ontology (GO) database (http://geneontology.org) and Analysis in STRING, integrative and interactive web-based tools like GeneMANIA (http://genemania.org) [32]. The basic idea of annotating a given gene list is to compare it with Generally speaking, larger module tends to be enriched multiple biological functions, which may not be very relevant to each other. The first module identified from STRING PPI network contains 76 genes and according to GeneMANIA [32], among all potential links inside the module, there are 51.63% co-expression links, 33.59% are physical interactions and 4.16% are pathways. The top biological processes and pathways related to this module are listed in Table 1. We can see several general responses found by STRING, and the hub nodes in this module shown as in Fig. 5 also indicate general important genes related to receptor signaling and signal transduction (also see http://bit.ly/2a87HTB). While functions given by GeneMANIA show that these functions are intensively involved in Th17 cell differentiation. Several items are also claimed in a recent publication [33], which is consistent with the experimental settings.
The smaller module tends to play more specific roles in the process. Figure 6 plotted by GeneMANIA [32] shows the interactions between these 17 genes, and 87.84% of them are co-expression links according to previous studies. The function is more about pathways, like Fc-epsilon receptor signaling and Fc receptor signaling. Related genes contained in this module are MAP3K1, MAP3K5 and MAP3K6, mitogen-activated protein kinase kinase, which play central roles in the regulation of cell survival and differentiation. The connection between MAP3k and Th17 differentiation is supported by [34], through encoding MEKK1 which controls both B and T cell proliferation. And MEKK1 regulates Cdkn1b expression in Th17 cells. Other processes enriched by the module are also mentioned in a recent study [35].
Different sources of protein-protein interactions also make an impact. From the comparison between modules between BioGRID and STRING networks, we can see that they share some functions such as Fc-epsilon receptor signaling pathway, but they are not totally the same. Interactions in BioGRID are largely rely on highthroughput datasets and previous studies, which makes the identified module less focused to some functions. Irreverent supporting materials make the set of genes has lower coverage and higher FDR, given by functional enrichment report by GeneMANIA. In contrast, STRING has many experimental and predicted interactions [27], and the combined score of links can further help to pick more reliable edges of PPI network. Identified modules from this network tend to have more significant biological meanings. Take the first module (http://bit.ly/2asI0Nw) for example, gene ontology tells the hierarchical biological process of this module by starting with regulation of tyrosine phosphorylation of Stat3 protein. The Stat3 has been shown to be a master regulator of Th17 cell differentiation [36] and related immune pathways.
Conclusion
Searching for connected subnetworks in biological networks is essentially a combinatorial optimization problem, which can be solved by various metaheuristic methods. We design a direct strategy on a set of node to get connected subnetworks, thus avoid complicated graph divide operations. And the binary encoding can be used in general heuristic optimization algorithms like simulated annealing and genetic algorithm. And the GA is further improved by a memetic algorithmic framework embedded with local search operators. Empirical studies on real networks shows the effectiveness and efficiency of this strategy.
Future works can be considered in two different aspects. From the network model, how to derived effective algorithmic model to deal with directed and weighted network is of interests. The PPI network itself is weighted and confidence score of interactions may affect results. And the direction of some edges has biological meanings as well. From the evolutionary algorithm view, the method used in this paper is rather superficial and various state-of-the-art techniques have not been employed. Further improvements on EA may make it more efficient in handling large-scale networks. | 5,622.4 | 2017-03-01T00:00:00.000 | [
"Computer Science"
] |
Improving power and accuracy of genome-wide association studies via a multi-locus mixed linear model methodology
Genome-wide association studies (GWAS) have been widely used in genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which often is too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP-effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering time related traits in Arabidopsis thaliana and detected more genes than previous reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
Genome-wide association studies (GWAS) have been widely used in genetic dissection of complex traits, especially with the development of advanced genomic sequencing technologies. The mixed linear model (MLM) method 1,2 by fitting a population structure (Q) and a polygene (K), the so called Q + K model, is the most popular method used for GWAS. After the MLM of Yu et al. 2 was published, many advanced MLM-based methods have been proposed [3][4][5][6][7] , primarily to improve computational efficiency. A common feature of the MLM based GWAS is the one-dimentional genome scan by testing one marker at a time. The major advantage of such a genome scanning approach is the ability to handle a large number of markers, e.g., more than a million markers. However, such a model does not facilitate good estimates of marker effects because the model is never correct if a trait is indeed controlled by multiple loci, which is the case for most complex traits. Another problem with the method is the issue of multiple test correction for the threshold value of significance test. The typical Bonferroni correction is often too conservative, so that many important loci may not pass the stringent criterion of significance test.
Most complex traits are controlled by several major loci plus numerious undetectable loci with small effects (collectively called polygenes). The one-dimentional scanning GWAS will never recover the true model due to the intrinsic limitation of the model. Multi-locus models are better alternative methods for GWAS; these include Bayesian LASSO 8 , penalized Logistic regression 9,10 , Elastic-Net 11 , and empirical Bayes 12 methods. An obvious advantage of these methods is that no Bonferroni correction is required because of the multi-locus nature. Although these methods are shrinkage approaches and are supposed to be able to handle the number of markers several times larger than the sample sizes, they will fail when the number of markers is many times larger than the sample size, either due to constraint in computational time or limit in memory allocation. These models will also face the multicollinearity issue when the marker density is extremely high. Recently, Segura et al. 13 has proposed a multi-locus MLM approach. However, further refinement is needed.
If the number of markers is small or moderately large and can be handled by one of the multi-locus approaches, we should consider a multi-locus method for GWAS, otherwise, a combination of the single locus genome scanning and multiple locus analysis may be considered. In the first stage, markers are scanned and selected with a low criterion of significance test. In the second stage, a multiple locus method is implemented for markers that have passed the initial screening. Statistical tests and marker effect estimation are then based on the multiple locus model. The MLM method of GWAS in the initial scanning stage treats marker effects as fixed. Goddard et al. 14 proposed to treat marker effects as random, following a normal distribution with zero mean and an unknown variance. They described several advantages of the random model approach over the fixted model treatment. One advantage is that the random model approach will shrink the estimated (better called predicted) marker effects towards zero, leading to a maximum correlation between observed and predicted phenotypic values. However, Goddard et al. 14 did not provide an efficient computational algorithm to estimate (or predict) marker effects.
In this study, we developed an efficient algorithm to estimate variances of the markers and predict effects of these markers. This method is called the random-SNP-effect mixed linear model (RMLM). The result of RMLM can either be treated as the final result of GWAS or used to select markers for the second stage analysis. In the second stage of GWAS, the selected markers are simultaneously evaluated in a single model using an EM empirical Bayes approach 15 . Estimation of marker effects and significance tests of these markers are performed in the second stage. This method is called the multi-locus random-SNP-effect mixed linear model (MRMLM). We demonstrate that this two-stage combined method of GWAS has significantly increased the statistical power and decreased Type 1 error compared with other methods, including the efficient mixed model analysis (EMMA).
Results
Statistical power for quantitative trait nucleotide (QTN) detection. To confirm the effectiveness of the MRMLM and RMLM methods, a series of Monte Carlo simulation experiments were carried out. Each sample was analyzed by the two new methods (MRMLM and RMLM) along the EMMA method. The significance threshold p value for the MRMLM method was 0.0002 (see Methods for calculation of this threshold). The corresponding threshold p value for the RMLM method was . /m 0 05 e (modified Bonferroni correction for multiple tests), where m e is the effective number of markers (see Methods for calculation of m e ). The threshold p value for the EMMA method was . /m 0 05 (Bonferroni correction for multiple tests), where m is the total number of markers. For each QTN, power was defined as the proportion of samples where the QTN was detected (p value smaller than the designated threshold). In the first simulation experiment where no polygenic variance was simulated, the MRMLM method has the highest power for all six QTNs simulated, followed by the RMLM method and the EMMA method ( Fig. 1a and Table S1). On one occasion (QTN number 5), the RMLM method is slightly more powerful than the MRMLM method. In the second simulation experiment when an additive polygenic variance (φ = 2 2 and = . h 0 092 2 ) was added to the polygenic background, the same trend in power was observed where MRMLM is more powerful than RMLM and EMMA is the least powerful ( Fig. 1b and Table S2). On one occasion (QTN number 4), the three methods have very similar power, with RMLM being slightly more powerful than the other two methods. In the third simulation experiement where three pairs of epistatic effects (collectively contributing 0.15 of the phenotypic variance) was added to the genetic background, again, MRMLM is the most powerful followed by RMLM and EMMA ( Fig. 1c and Table S3) with an exception for QTN number 5 where RMLM is slightly more powerful than MRMLM. The sample sizes of the above three simulation experiments were all = n 199. We also changed the sample size from 199 to 149 and 99 under the fourth simulation experiment with the MRMLM method. The statistical powers are demonstrated in Fig. 1(d). As expected, the statistical power has declined as we reduced the sample size (Table S4). Similar trend of power changes were also observed for different numbers of markers (Table S5).
Accuracies of estimated QTN effects.
We used the mean squared error (MSE) to measure the accuracy of an estimated QTN effect for a particular method. We evaluated the accuracies for all the six simulated QTNs from all three methods. The MSE's are demonstrated in Fig. 2, where panels (a), (b) and (c) represent the results from the three simulation experiments, respectively. The MRMLM method is consistently more accurate than the RMLM method, which in turn is more accurate than the EMMA method (see Tables S1-S3). Figure 2(d) shows the results of different sample sizes by the MRMLM method from the fourth simulation experiment, showing that, as expected, a large sample size is associated with a small MSE (Table S4). Type 1 error and ROC curve. The empirical Type 1 error rates of the three methods from the three simulation experiments are illustrated in Fig. 3. Overall, the three methods have similar Type 1 errors except the first simulation experiment where EMMA has an very large Type 1 error compared with the two new methods. In the second and third simulation experiments, EMMA has the least Type 1 errors followed by the MRMLM and RMLM methods. Fig. 3(d) shows the empirical Type 1 errors of the MRMLM method from the fourth simulation experiment with three different sample sizes (199, 149 and 99), where Type 1 error has been increased with a decreased sample size.
A useful way to compare different methods for their efficiencies in the detection of significant effects is the receiver operating characteristic (ROC) curve comparison. An ROC is a plot of the statistical power against the controlled Type 1 error. The higher the curve, the better the method. When sixty-one probability levels for significance, between 1E-8 to 1E-2, were inserted, the corresponding powers were calculated in the first simulation experiment. Figure 4 shows the comparison of the ROC curves from the three methods for each of the six QTNs Scientific RepoRts | 6:19444 | DOI: 10.1038/srep19444 simulated from the first simulation experiment. Clearly, the MRMLM method stands out way above the other two methods while the RMLM is better than the EMMA method when the Type 1 error is relaxed.
Computational efficiency. When performing GWAS on the simulated data, we first scanned the genome by the single-locus RMLM method to find the association between each SNP and the trait of interest. This process took 12.78 hours (Intel Core i5 CPU 4570, 3.20 GHz, Memory 8.00G, 1000 datasets) in the first simulation experiment. The MRMLM took an additional 0.51 hours to conduct the multi-locus analysis. Although the MRMLM method requires more computing time, the high power and small MSE relative to the RMLM method are good justifications for the improved method. The EMMA method took 68.77 hours for completing the analysis for the first simulation experiment.
Real data analyses in Arabidopsis.
We analyzed six flowering time related traits of the Arabidopsis thaliana population published by Atwell et al. 16 using all the three methods (MRMLM, RMLM and the EMMA). The numbers of SNPs significantly associated with the six traits are 29, 15, 27, 13, 22 and 14, respectively, for traits LD (days to flowering under long days), LDV (days to flowering under long days with vernalization), SD (days to flowering under short days), 0 W (days to flowering under long days with no vernalization), 2 W (days to flowering under long days for two week vernalization) and 4 W (days to flowering under long days for four week vernalization), from the MRMLM method. The corresponding numbers of associated SNPs are 8, 5, 3, 6, 6 and 7 from the RMLM method. The EMMA method only detected 1, 3, 1, 0, 1 and 2 SNPs for the above six traits (see Table S6 for details of the associated SNPs). These significantly associated SNPs for each trait were used to conduct a multiple linear regression analysis and the corresponding Bayesian information criteria (BIC) were calculated. The MRMLM method shows the lowest BIC values for all traits ( Table 1), indicating that SNPs detected by the MRMLM method fit the data better than the other methods.
We found that 6, 4, 6, 2, 3 and 5 genes previously reported to be associated with the six traits are in the proximity of the SNPs detected by the MRMLM method. The corresponding numbers of genes in the vicinity of the SNPs detected by the RMLM method are 3, 3, 2, 1, 1 and 2, respectively, for the six traits. Only 2, 2, 1, 0, 0 and 1 genes are in the neighborhood of the SNPs detected by the EMMA method (see Table 2 and Table S7 for details of the genes). Clearly, the MRMLM method detected more known genes than the other two methods, indicating that this multi-locus model (MRMLM) has a higher power for QTN detection than the single-locus model (RMLM) and the EMMA method.
Discussion
To reduce computing time required for GWAS, Zhang et al. 6 proposed a P3D algorithm that fixes the polygenic-to-residual variance ratio in the genome-wide scanning step. Kang et al. 3 used a matrix transformation prior to the genome-wide scanning stage and treated the scanned SNP effect as fixed. If we view the SNP effect as random, one additional variance of the QTN effect needs to be estimated, and the complexity and computing time in parameter estimation has been increased, as shown with the MLM-based approaches of Zhang et al. 1 and Yu et al. 2 . In the present study, a new matrix transformation is constructed, the P3D algorithm is adopted, and the residual variance is estimated after the variance of the QTN effect is estimated. Therefore, only one parameter, the ratio of the QTN effect variance to the residual variance, is estimated in the genome-wide scanning stage. In doing so, the MRMLM method requires only 20% of the computing time needed by the EMMA method. More importantly, the new method performs better than EMMA in terms of high statistical power, low Type 1 error and low MSE of an estimated QTN effect.
The current GWAS method is a single-locus analysis approach under polygenic background and population structure controls. The number of tests involved is the number of markers, requiring a Bonferroni correction for multiple tests. To control the experimental error at a genome-wide level of 0.05, the significance level for each test should be adjusted as . /m 0 05 , which is 5E-8 if one million markers are to be scanned. In the multi-locus model, however, there is no need for such a multiple test correction due to the multi-locus and shrinkage natures of the new method. This conclusion is also supported by the results of Monte Carlo simulation studies. We compared the result of EMMA in this study with the result reported in Atwell et al. 16 ; fewer known genes are listed in Table 2, because some genes identified in previous studies are not significantly associated with the traits after the Bonferroni correction. If the significance level was changed to a less stringent criterion, more known genes would have been found (Table S7). We investigated the effect of the critical value on the selection of putative QTNs. Similar results were observed for the three critical values selected (0.001, 0.01 and 0.05), although the 0.01 value resulted in the marginally best performance in terms of statistical power of QTN detection and accuracy of QTN effect estimation (Table S8).
There are several multi-locus GWAS approaches already published in the literature 5,13,17 . When the number of markers is not large, all marker effects and their interactions can be included in a single model, such as the empirical Bayes method 12 . If the number of markers is large, this single model approach is not feasible. One question is how to reduce the number of parameters in a full genetic model. Zhou et al. 5 developed a Bayesian sparse linear mixed model and Moser et al. 17 proposed a Bayesian mixture model. Under these models, two to four common components in the mixture distribution were considered and only a few variance components were estimated. Although about 500 effects in the genetic model are finally considered after several rounds of Gibbs sampling, the computing time becomes a major concern for these Bayesian approaches. Therefore, the ideal method is to delete spurious QTN effects prior to implementing the multi-locus model. The first step of MRMLM is RMLM, which deletes the majority of the markers in advance so that only a small set of markers are left to the second stage for evaluation. The MRMLM method differs from the multi-locus method of Segura et al. 13 in several aspects. First, the SNP effects are viewed as random in the MRMLM method while they are treated as fixed effects in Segura et al. 13 . Secondly, we adopted a simple matrix transformation technique to improve the computational efficiency while Segura et al. 13 implemented an algorithm involving three complicated treatments. Finally, the MRMLM method uses one set of selected SNPs, which have p values less than 0.01 in the initial scanning while Segura et al. 13 requires MCMC samplings.
Atwell et al. 16 listed 500 most significantly associated SNPs, although some of them were not significant at the . /m 0 05 criterion. In the neighborhood of these SNPs, some genes were found to be related to the traits of interest (Table 2 and S7). In this study, 21 genes for six flowering time traits are found to be in the vicinity of the detected SNPs, consistent with results previously reported, as shown in the database (http://www.arabidopsis.org/), the work of Atwell et al. 16 and related references [18][19][20][21][22][23][24] (Table S7). Therefore, the Arabidopsis thaliana GWAS results of his study appear to be reliable.
In the study of GWAS methodology, real genotypes in natural population are frequently used to conduct Monte Carlo simulation studies 1,2,6 . In this study, therefore, the real SNP dataset in Atwell et al. 16 was adopted in the simulation studies. To further confirm the new methods, 200 samples with simulated genotypes derived from the minPtest R package 25 were analyzed. As a result, similar results were found (Table S9).
Conclusion
The RMLM simply treats the SNP effect as random, and includes new matrix transformation, fixing the polygenic-to-residual variance ratio and estimating residual variance after the variance of QTN effect is obtained. Meanwhile, it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM, and all the effects in the model are estimated by an EM empirical Bayes method. Results from real data analyses and simulation studies show that the MRMLM has the highest power for QTN detection, the best fit for genetic model, the minimal bias in the estimation of the QTN effect, and the strongest robustness, as compared with the RMLM and the EMMA.
Random
where X is an incident matrix for fixed (non genetic) effects, α is a vector of the fixed effects, Z k is a vector of genotype indicators for the kth SNP that are coded as 1, 0 and −1, for one of the two homozygotes, the heterozygote and the other homozygote, respectively, γ k is the effect of marker k with an assumed normal distribution of mean zero and variance φ k 2 is a vector of polygenic effects with a multivariate normal distribution of mean zero and variance φ 2 described by a covariance structure K, ε is a vector of residual errors with a σ ( , ) N 0 I 2 distribution and σ 2 is the residual variance. The expectation of y is = α ( ) y X E and the variance is where λ φ σ = / In the single-locus RMLM, the polygenic variance ratio λ is only estimated once under a pure polygenic model (the null model) prior to the marker scanning stage. The estimated variance ratio, λ , is then treated as a constant when markers are scanned. This approach has been called GWAS with population parameters previously defined (P3D) 6 . The original P3D was implemented when γ k was treated as a fixed effect. In this study, γ k is treated as a random effect, which presents a great challenge in computation. However, we adopted a new algorithm to ease the computation, as described in the following paragraph. Let us perform eigen decomposition for K so that = is a diagonal matrix for the eigenvalues and U is an n × n matrix for the eigenvectors.
be transformed variables so that be the general covariance structure. After absorbing α and σ 2 , we have the following profiled restricted log likelihood function, where q is the rank of matrix X and This likelihood function contains only one unknown parameter λ k . The Newton algorithm for λ k is Once the iteration process converges, the solution is a single-locus RMLM estimate of λ k , denoted by λˆk. Note that the likelihood function involves R k and − R k 1 , which are very expensive to compute. However, the special structure of R k allows us to implement the Woodbury matrix identities 27 for calculating R k and − R k 1 . As a result, the random model approach does not present a substantial increase in computational time.
Given λ λ = k k , the estimates of α and σ 2 are The best linear unbiased prediction (BLUP) of γ k is also the conditional expectation of γ k given ⁎ y and has the following expression, The conditional variance is Under the single-locus RMLM approach, we first estimate λ and then fix it at λ to estimate λ k and scan all markers by testing λ k = 0 for each SNP. The null hypothesis test for γ = H : 0 The p-value of this Wald test is calculated using 2 is a Chi-square variable with one degree of freedom. Because the estimated marker effects under the random model are shrunken towards zero, we are able to use a modified Bonferroni correction to find the threshold p value for genome-wide significance tests 28 . This modified Bonferroni correction uses an effective number of markers to adjust for multiple tests so that the threshold p value
Multi-locus random effect mixed linear model (MRMLM).
The single marker RMLM method described above can also be considered as an intial screening step for a new multi-locus random effect mixed linear model (MRMLM) that is described here. We use a less stringent criterion for the initial stage screening from RMLM for all markers that have p values smaller than 0.01. In addition, consecutive markers passing the 0.01 threshold around an already selected marker (± 20 kb for real data analysis and ± 1 kb for simulated data analysis) are eliminated to reduce collinearity among selected markers. Only these selected markers are included in the muti-locus model for further evaluation, including estimation of marker effects and significane tests. Due to the shrinkage nature, the majority of markers will be eliminated in the intitial screening. Therefore, the number of markers left in the second stage analysis is often a small subset of all markers, say a few hundred or a few thousand at most. Among the remaining markers, all those that passed the modified Bonferroni correction are used to conduct a likelihood ratio test (LRT), and the others are treated as random. If the LOD score for one marker in the LRT is more than 1.50, this marker is treated as fixed, or it is viewed as random. This small number of surviving markers are then included in a single multi-locus model. We propose to use the EM empirical Bayes (EMEB) method 15 because this method also provides a significance test for each marker (likelihood ratio test), while the LASSO method does not have a default method to perform such a test. The EMEB method is also a random model approach because each random marker effect is assigned an empirical distribution with a variance. Because the model is multi-locus in nature, there is no requirement for Bonferroni correction. Therefore, the original 0.05 threshold may be used for significance test. Considering that all markers are selected in the first stage, we decided to place a slightly more stringent criterion of 0.0002, which is converted from LOD score 3.0 of the test statistics using χ = ( > × . ) = .
Efficient mixed model analysis (EMMA). This is an existing method for GWAS 3 and used as a gold standard for comparison. This method is the fixed model version of the original MLM, in which γ k was treated as a fixed effect with no distribution assigned. The method was implemented in the R software package EMMA (http://mouse.cs.ucla.edu/emma/). The threshold of p value was set as . /m 0 05 after Bonferroni correction for multiple tests, where m is the number of markers.
Simulation experiments. In the first four simulation experiments, all the SNP genotypes were derived from the 216130 SNPs in Atwell et al. 16 . All the SNPs between 11226256 and 12038776 bp on Chr. 1, between 5045828 and 6412875 bp on Chr. 2, between 1916588 and 3196442 bp on Chr. 3, between 2232796 and 3143893 bp on Chr. 4, and between 19999868 and 21039406 bp on Chr. 5 were used to conduct simulation studies. The sample size was the number of individuals in Atwell et al. 16 , namely 199. In the first simulation experiment, six QTNs were simulated and placed on the SNPs with allelic frequencies of 0.30; their heritabilities were set as 0.10, 0.05, 0.05, 0.15, 0.05 and 0.05, respectively; and their positions and effects are listed on Table S1. The average was set at 10.0; and residual variance was set at 10.0. Empirical statistical power for each QTN was calculated as the proportion of samples in which the p value is smaller than the designated threshold p value. A QTN detected within 1 kb of the simulated QTN was considered a true QTN. Empirical Type 1 error for each method was defined as the proportion of significant markers (excluding the markers overlapping with the six QTNs) over all markers with zero effects. In addition to power and Type 1 error, we also evaluated the mean square error (MSE) for each of the six simulated QTNs. For the ith QTN for = , ..., where γ ij is the estimated effect of QTN i from the jth sample and γ i in the true effect of QTN i. A method with a small MSE is generally more preferable than a method with a large MSE.
To investigate the effect of the polygenic (small effect genes) background on the MRMLM and RMLM methods, the polygenic effect was simulated by multivariate normal distribution σ ( , ) N K 0 pg 2 , where σ pg 2 is the polygenic variance, and K is the kinship coefficient matrix between a pair of individuals. Here σ = 2 pg 2 , so = . h 0 092 pg 2 . The QTN size (h 2 ), average, residual variance, and other values were the same as those in the first simulation experiment.
To investigate the effect of epistatic background on the MRMLM and RMLM methods, three epistatic QTNs each with σ = . were simulated. The first one was placed between 3063784 bp on Chr. 4 and 5227063 bp on Chr. 2; the second one was placed between 5986135 bp on Chr. 2 and 2031781 bp on Chr. 3; and the third one was placed between 2668059 bp on Chr. 3 and 11824678 bp on Chr. 1. The QTN sizes (h 2 ), average, residual variance, and other values were also the same as those in the first simulation experiment.
The Arabidopsis thaliana data. We also analyzed the well-known Arabidopsis thaliana data published by Atwell et al. 16 . The data contain = n 199 accessions with = m 216130 genotyped SNPs. Six flowering time related quantitative traits were analyzed using all the three methods (MRMLM, RMLM and EMMA). The six traits are: LD, LDV, SD, 0 W, 2 W and 4 W. These data were downloaded from the following website: http://www.arabidopsis. usc.edu/. We developed our own software to implement the data analysis (see Software S1). | 6,603.8 | 2016-01-20T00:00:00.000 | [
"Computer Science"
] |
Probiotic Mixture of Lactobacillus plantarum Strains Improves Lipid Metabolism and Gut Microbiota Structure in High Fat Diet-Fed Mice
The global prevalence of obesity is rising year by year, which has become a public health problem worldwide. In recent years, animal studies and clinical studies have shown that some lactic acid bacteria possess an anti-obesity effect. In our previous study, mixed lactobacilli (Lactobacillus plantarum KLDS1.0344 and Lactobacillus plantarum KLDS1.0386) exhibited anti-obesity effects in vivo by significantly reducing body weight gain, Lee’s index and body fat rate; however, its underlying mechanisms of action remain unclear. Therefore, the present study aims to explore the possible mechanisms for the inhibitory effect of mixed lactobacilli on obesity. C57BL/6J mice were randomly divided into three groups including control group (Control), high fat diet group (HFD) and mixed lactobacilli group (MX), and fed daily for eight consecutive weeks. The results showed that mixed lactobacilli supplementation significantly improved blood lipid levels and liver function, and alleviated liver oxidative stress. Moreover, the mixed lactobacilli supplementation significantly inhibited lipid accumulation in the liver and regulated lipid metabolism in epididymal fat pads. Notably, the mixed lactobacilli treatment modulated the gut microbiota, resulting in a significant increase in acetic acid and butyric acid. Additionally, Spearman’s correlation analysis found that several specific genera were significantly correlated with obesity-related indicators. These results indicated that the mixed lactobacilli supplementation could manipulate the gut microbiota and its metabolites (acetic acid and butyric acid), resulting in reduced liver lipid accumulation and improved lipid metabolism of adipose tissue, which inhibited obesity.
INTRODUCTION
Obesity has become a worldwide epidemic disease and has a serious impact on the healthy development of the human body, which is a major hidden danger to public health (Risk and Factor Collaboration, 2016;Lu J. et al., 2016;Solas et al., 2017). The researchers collected data from 195 countries and found that the prevalence of obesity worldwide has increased dramatically since 1980 (Reuter and Mrowka, 2019). In 2015, approximately 603 million adults and 107 million children suffered from obesity, with an overall prevalence of 12% for adults and 5% for children GRAPHICAL ABSTRACT | The mixed lactobacilli (Lactobacillus plantarum KLDS1.0344 and Lactobacillus plantarum KLDS1.0386) prevented high fat diet-induced obesity via regulating gut microbiota and lipid metabolism of the adipose tissue and inhibiting liver lipid accumulation in mice. (GBD 2015Obesity Collaborators, 2017. Obesity is defined by the World Health Organization as an excessive accumulation of fat that may be harmful to health and is diagnosed when body mass index (BMI) ≥30 kg/m 2 (Prospective Studies and Collaboration, 2009). In general, the intake of a large amount of high-sugar and high-fat food, sedentary work style and less physical exercise cause the energy intake to be larger than the energy consumption of the body, which will lead to the accumulation of excessive triglyceride in liver, kidney week and adipose tissue, thus causing obesity (Hurt et al., 2010;Heymsfield and Wadden, 2017;Bluher, 2019). Obesity can increase the risk of various diseases such as cardiovascular disease (Oikonomou and Antoniades, 2019), type 2 diabetes (Dietz, 2017), osteoarthritis (Schott et al., 2018), Alzheimer's disease (Solas et al., 2017), anxiety (Ogrodnik et al., 2019), depression (Tyrrell et al., 2018), and certain cancers [for example breast cancer (Picon-Ruiz et al., 2017;Hao et al., 2018), colorectal cancer (Wunderlich et al., 2018), pancreatic cancer (Zaytouni et al., 2017), stomach cancer (Murphy et al., 2018) and liver cancer (Shin et al., 2013)].
At present, the medications approved by the US Food and Drug Administration (FDA) for long-term weight management mainly reduce energy or food intake by causing fat malabsorption, promoting satiety, delaying gastric emptying, decreasing appetite, or acting on the central nervous system pathway, but they all have certain side effects such as nausea, diarrhea, vomiting, dry mouth, constipation, dizziness and fecal incontinence (Davidson et al., 1999;Smith et al., 2010;Gadde et al., 2011;Apovian et al., 2013;Xavier et al., 2015). For some patients with obesity to a certain extent, gastric banding, Roux-en-Y gastric bypass and vertical-sleeve gastrectomy, three types of surgery for obesity, are common treatments (Neylan et al., 2016;Schauer et al., 2016). Although more effective than drug intervention, these procedures are more risky (American College of Cardiology/American Heart Association Task Force on Practice Guidelines Obesity Expert Panel, 2013Panel, , 2014 and may cause some complications (Almalki et al., 2017). Therefore, safer and healthful nondrug therapies have been proposed, including the use of probiotics. Lactobacillus rhamnosus Lb102 and Bifidobacterium animalis ssp. lactis Bf141 isolated by Le Barz et al. from fermented milk products and human feces, respectively, were used to interfere with mice fed with high-fat diet and it was found that they could effectively alleviate the onset of obesity and reduce the content of liver fat in mice (Le Barz et al., 2019).
Lactobacillus plantarum KLDS1.0344 and L. plantarum KLDS1.0386 were isolated from traditional fermented dairy products in Inner Mongolia, China, and preserved in our laboratory. In addition, our previous studies have demonstrated that L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 have strong acid and bile salt resistance, high cell adhesion activities and lipid metabolism regulation properties (Tang et al., 2016;Jin et al., 2017;Lu et al., 2018;Yan et al., 2019;Yue et al., 2019). Hence, we investigated whether a mixture of L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 could prevent obesity. The results showed that the combined treatment of L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 could significantly improve some obesity-related indicators in high fat diet-fed mice, including body weight gain, Lee' s index, and body fat rate, etc. which established its effect in inhibiting obesity . However, the mechanism of the combined treatment of L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 to inhibit obesity was still unclear. Therefore, the aim of this study was to further explore whether the combined intervention of L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 prevented obesity by reducing liver lipid accumulation and regulating lipid metabolism and gut microbiota.
Preparation of Bacterial Strains
Lactobacillus plantarum KLDS1.0344 and L. plantarum KLDS1.0386 were inoculated in De Man Rogosa Sharpe (MRS) broth, respectively, according to 2% inoculation amount, cultured at 37 • C, subcultured every 24 h, cultured in the third generation to 18 h, then centrifuged at 2500 g at 4 • C for 10 min to harvest bacterial cell, and washed with a sterile phosphate buffered saline (PBS) for three times. The washed L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 were separately resuspended with sterile PBS to reach a concentration of 5 × 10 8 CFU/mL, and then the two were mixed at a ratio of 1: 1. Bacteria were freshly prepared daily during the 8-week experiment.
Experimental Animals
Male Specific Pathogen-Free (SPF) C57BL/6J mice, aged 21-28 days, were provided by Beijing Vital River Laboratory Animal Technology Co. Ltd. (Beijing, China) (Approval No. SCXK (JING) 2012-0001). These mice were housed in plastic cages under environmentally controlled conditions (temperature, 20-22 • C; relative humidity, 50 ± 10%; and lighting, 12 h light/12 h dark cycle) and given free access to standard diet and water to acclimate to the environment for a week before the start of the experiment. The animal experiment protocol was approved by the Institutional Animal Care and Use Committee of the Northeast Agricultural University under the approved protocol number Specific pathogen free rodent management (SRM)-06.
Experimental Design
After one week of acclimatization, the mice were randomly assigned to the following three groups: control group (Control), high fat diet group (HFD) and mixed lactobacilli group (MX), with 8 mice in each group. Mice in the control group were fed D12450B control diet, while the others were fed D12492 high fat diet. The feed was manufactured and supplied by Beijing Keao Xieli Feed Co. Ltd. (Beijing, China), and its formula is shown in Supplementary Table S1. From 9: 00 to 11: 00 am every day, the control group and the high fat diet group mice were gavaged with 0.2 mL of sterile PBS solution, and for the mixed lactobacilli group, the mice was administered with 0.2 mL of the mixed lactobacilli suspension (10 8 CFU). During the whole experiment, the padding and water were changed twice a week, and the high-fat diet was changed every day to prevent the oxidation of fat to produce odor which affected the mice to eat. The entire experiment lasted for 8 weeks. Eight weeks later, all mice were removed from diet for 12 h and then were anesthetized and sacrificed. The serum samples were obtained via centrifugation at 1500 g for 10 min at 4 • C, and stored at −80 • C for further analysis. Livers and epididymal fat pads were stored at −80 • C. The cecal contents of the mice were collected into sterile tubes and stored at −80 • C for later use.
Determination of TC, TG, LDL-C and HDL-C in Serum
The concentration of TC, TG, LDL-C and HDL-C in serum of each group of mice were measured with the commercial assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocols. The final data of TC, TG, LDL-C and HDL-C are reported as mmol/L.
Determination of ALT and AST in Serum
The activity of ALT and AST in serum of each group of mice were measured with the commercial assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocols. The levels of ALT and AST are reported as U/L.
Determination of TG, Antioxidant Enzymes, GSH and MDA in the Liver
Liver homogenates were centrifuged at 1150 g for 10 min at 4 • C and the protein concentration was measured by bicinchoninic acid (BCA) method using a commercial kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Then, the levels of TG, catalase (CAT), superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), GSH and MDA in the mouse livers were measured using commercially available kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) following the manufacturer's instructions.
Expression of Related Genes in Epididymal Fat Pads
To determine the expression of key genes for lipid metabolism in epididymal fat pads, mRNA levels of adenosine 5monophosphate-activated protein kinase-α (AMPK-α), hormone-sensitive lipase (HSL), peroxisome proliferatorsactivated receptor-γ (PPAR-γ), CAATT/enhancer binding proteins α (C/EBPα), fatty acid synthetase (FAS) and acetyl CoA carboxylase (ACC) were assessed by quantitative real-time polymerase chain reaction (qPCR). Total RNA of the tissues was extracted with TRIzol reagent (Invitrogen, Carlsbad, CA, United States) according to the manufacturer's instructions. The mRNA was reverse transcribed into cDNA using the PrimeScript TM RT reagent Kit with gDNA Eraser (Takara, Otsu, Japan). Quantitative real-time PCR was performed using reverse-transcribed cDNA as a template, and ABI7500 fluorescence quantitative PCR instrument (Applied Biosystems, Woolston, Warrington, United Kingdom) and SYBR Green PCR Master Mix (Applied Biosystems, Woolston, Warrington, United Kingdom) were used according to the manufacturer's protocols. Specific forward and reverse primer sequences for quantitative real-time PCR are listed in Supplementary Table S2. All reactions were performed in triplicate. Relative quantification of gene expression were analyzed with the 2 − Ct method. The target gene levels were calculated relative to β-actin and the data were shown as fold changes.
Analysis of Gut Microbiota
Total bacterial DNA of the cecal contents in each group (n = 3) was extracted using the E.Z.N.A. R Stool DNA Kit (Omega Bio-Tek, Norcross, GA, United States) according to the manufacturer's instructions. The extracted DNA was checked by agarose gel electrophoresis and quantified using a NanoDrop ND-2000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, United States). The V3-V4 hypervariable region of the bacterial 16S rDNA was amplified by PCR using forward primer 338F (5 -ACTCCTACGGGAGGCAGCAG-3 ) and reverse primer 806R (5 -GGACTACHVGGGTWTCTAAT-3 ). The products of PCR were purified with AxyPrep DNA Gel Extraction Kit (Axygen Bioscience, Union City, CA, United States), quantified using Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA, United States), and pooled in equimolar ratios. PCR amplicons sequencing was performed on Illumina Miseq platform (Illumina Inc. San Diego, CA, United States) following the standard protocols. The resulting raw reads were merged with FLASH software (V1.2.11) (Magoč and Salzberg, 2011) and quality-filtered using QIIME software (V1.7.0) (Caporaso et al., 2010). High-quality clean tags were obtained by using the UCHIME algorithm to identify and remove the chimeric sequences (Edgar et al., 2011). The tags with nucleotide 97% sequence identity were clustered into the same operational taxonomic units (OTUs) using Uparse software (V7.0.1001) (Edgar, 2013). These OTUs were subjected to analysis using the Greengenes database by PyNAST software (Version 1.2) and were annotated to taxonomic information (Yilmaz et al., 2013). The species abundance of microorganisms in the three groups at phylum and genus levels were compared. Linear Discriminant Analysis Effect Size (LEfSe) was used to identify potential microbial biomarkers associated with different treatments with an effect size threshold of 2 (Segata et al., 2011).
Determination of Short Chain Fatty Acids (SCFAs) in the Intestine
In order to determine the level of SCFAs in the intestine, 50 mg the cecal contents were mixed in 0.3 mL pure water, treated with a ball mill at 45 Hz for 4 min, ultrasonically treated in ice water bath for 5 min, and centrifuged at 5000 g for 20 min at 4 • C. The supernatant (0.2 mL) and 0.3 mL pure water were mixed evenly, treated with a ball mill at 45 Hz for 4 min, ultrasonically treated in ice water bath for 5 min, and centrifuged at 5000 g for 20 min at 4 • C. After that, 0.3 mL the supernatant was uniformly mixed with the original 0.2 mL the supernatant, and 0.1 mL 50% H 2 SO 4 and 0.4 mL the internal standard solution (2-Methylvaleric acid dissolved in diethyl ether to 50 µg/mL) was added. Next, the sample was centrifuged at 12000 g for 15 min at 4 • C and allowed to stand at −20 • C for 30 min. The supernatant was transferred to a sample vial and detected by gas chromatography-mass spectrometry (GC-MS). GC-MS detection was performed using an Agilent 7890 gas chromatography mass spectrometer equipped with an Agilent HP-FFAP capillary column (30 m × 250 µm × 0.25 µm, J&W Scientific, Folsom, CA, United States). Specific chromatographic conditions were used with reference to the method previously described (Zheng et al., 2013). Acetic acid,butyric acid, propionic acid, valeric acid (Sigma, St. Louis, MO, United States) were used as the standards. The concentration of each SCFA was determined according to a standard curve obtained from seven different concentrations of standards.
Statistical Analysis
All experiments were performed with at least three replicates and all experimental data were displayed as mean ± standard deviation (SD). Analysis of the data was carried out using SPSS 20.0 software (SPSS Inc. Chicago, IL, United States). Statistical differences among groups were determined using one-way analysis of variance (ANOVA), followed by Duncan's multiple range test. The Spearman's rank correlation coefficients between the relative abundance of mixed lactobacillimanipulated gut microbiome and obesity-related indicators were determined using R software 3.4.1 for correlational statistical analysis. P-values < 0.05 were considered to be statistically significant.
Effect of Mixed Lactobacilli on Blood Lipids
The blood lipid levels of the three groups of mice are shown in Figure 1. Compared with the control group, mice in the HFD group showed a dramatical increase in the serum levels of TG, TC, and LDL-C and significant decrease in the serum level of HDL-C (p < 0.01), indicating that abnormal blood lipid metabolism was occurred in the HFD group. Conversely, oral administration of mixed lactobacilli markedly inhibited these changes of lipid parameters in the serum (p < 0.01).
Effect of Mixed Lactobacilli on Liver Function
Serum ALT and AST levels, which are commonly used as indicators for evaluating liver function, were measured. As shown in Figures 2A,B, in a comparison between the control and HFD groups, the HFD group showed significantly higher levels of ALT and AST in serum than that of the control group (p < 0.01), indicating liver damage in these mice. Notably, treatment with mixed lactobacilli significantly reduced serum ALT and AST levels induced by high fat diet (p < 0.01).
Effect of Mixed Lactobacilli on Lipid Accumulation in the Liver
To determine lipid accumulation in the liver of mice, we examined the levels of TG in the liver of the three groups of mice. As shown in Figure 3, the liver TG concentration was pronouncedly higher in the HFD group when compared with the control group (p < 0.01). However, the mixed lactobacilli administration markedly decreased the high levels of TG induced by high fat diet (p < 0.01).
Effect of Mixed Lactobacilli on Oxidative Stress in the Liver
The levels of antioxidant enzymes, GSH and MDA in the liver of the mice were measured. As depicted in Figures 4A-D, the levels of CAT, SOD, GSH-Px and GSH in the liver of the mice of the HFD group were significantly decreased as compared with that of the control group (p < 0.01). However, these reduced levels were significantly elevated by mixed lactobacilli supplementation (p < 0.01). Furthermore, mixed lactobacilli treatment significantly reduced liver MDA levels induced by high fat diet ( Figure 4E, p < 0.01). These findings suggested that mixed lactobacilli administration could enhance the antioxidant capacity of the mice liver.
Effect of Mixed Lactobacilli on Key Genes of Lipid Metabolism in Epididymal Fat Pads
The results and comparisons of mRNA expression of key genes for lipid metabolism in the epididymal fat pad are illustrated in Figure 5. As shown in Figures 5A,B, high fat diet feeding caused a significant down-regulation of mRNA levels of AMPKα and HSL in the epididymal fat pad as compared to the control group (p < 0.01), which was normalized by the mixed lactobacilli supplementation. In addition, as shown in Figures 5C-F, the mRNA expression levels of ACC, FAS, PPAR-γ, and C/EBP-α in the epididymal fat pad of the mice in the HFD group were significantly higher than those in the control group (p < 0.01 or p < 0.05). However, after treatment with the mixed lactobacilli, the mRNA expression levels of ACC, FAS, and PPAR-γ were significantly decreased (p < 0.01), and the mRNA expression level of C/EBP-α was also decreased but not significant (p > 0.05).
Effect of Mixed Lactobacilli on the Gut Microbiota Composition
To investigate whether mixed lactobacilli have an important role in the bacterial communities of high fat diet-fed mice, the cecal gut microbiota of the mice was analyzed by sequencing the 16S rDNA variable region V3-V4. At the phylum level, the dominant components in all groups were Firmicutes and Bacteroidetes, with a ratio of more than 85% (Figure 6A). Compared to the control group, the HFD group exhibited a higher relative abundance of Firmicutes and a lower relative abundance of Bacteroides, representing 79.67 and 9.43%, respectively. Consistently, a significant increase in the ratio of Firmicutes to Bacteroides in the HFD group was observed compared to the control group (p < 0.01, Table 1). However, mixed lactobacilli treatment attenuated the increase in Firmicutes, the decrease in Bacteroidetes and the increase in Firmicutes-to-Bacteroidetes ratio induced by high fat diet. The distribution of gut microbiota at the genus level in different groups was shown by the genera abundance heatmap ( Figure 6B). Compared with the control group, the relative abundances of Bifidobacterium, Bacteroides, Alistipes, Lachnospiraceae NK4A136 group and Alloprevotella were decreased in the HFD group but the relative abundances of Parabacteroides, Eubacterium xylanophilum group, GCA-900066575, Lachnoclostridium, Lachnospiraceae UCG-006 and Romboutsia were increased, all of which were inhibited by mixed lactobacilli supplementation ( Figure 6B). Collectively, these results implied that mixed lactobacilli consumption clearly modulated the taxonomic composition of the intestinal flora of mice fed with high fat diet.
To identify predominant microbiota in each group, LEfSe analysis was performed. The resulting cladogram (Figure 7) disclosed that Bacteroidetes, Alloprevotella and Alistipes were more dominant in the control group than the other two groups. The HFD group was enriched with Firmicutes, Lachnospiraceae UCG-006, Lachnoclostridium, Romboutsia, Parabacteroides, GCA-900066575 and Eubacterium xylanophilum group, while the MX group was enriched with Lachnospiraceae NK4A136 group and Bacteroides. The histogram of the Latent Dirichlet Allocation (LDA) scores (Figure 8) further revealed a clear difference between the control, HFD and MX groups in terms of the composition of biological clades, which was in agreement with the aforementioned results.
Effect of Mixed Lactobacilli on SCFAs
To explore changes in SCFAs metabolism in the intestine, the levels of acetic acid, butyric acid, propionic acid and valeric acid in the cecal contents of different groups of mice were determined by GC-MS (Supplementary Figure S1). As shown in Table 2, the levels of acetic acid and butyric acid were strikingly decreased in the HFD group compared with the control group (p < 0.01), whereas supplement with mixed lactobacilli significantly increased the levels of the two SCFAs (p < 0.05 and p < 0.01). However, after 8 weeks of mixed lactobacilli treatment, the levels of propionic acid and valeric acid did not change significantly.
Correlation Between the Gut Microbiome and Obesity-Related Indicators
The correlations between the relative abundances of the dominant gut microbial community at the genus level (the top 35 genera according to the relative abundance) and obesity-related indicators were determined by Spearman's correlation analysis (Figure 9). Obesity-related indicators included short chain fatty acids (acetic acid, butyric acid, propionic acid and valeric acid), genes of epididymal adipose tissue (AMPK-α, HSL, ACC, FAS, PPAR-γ, and C/EBP-α), hepatic parameters (TG, CAT, SOD, GSH-Px, GSH, MDA, ALT, and AST) and blood lipids (TG, TC, HDL-C, and LDL-C). The relative abundances of Bifidobacterium, Bacteroides, Alistipes, Lachnospiraceae NK4A136 group and Alloprevotella were positively correlated with the levels of acetic acid and butyric acid, while the relative abundances of Parabacteroides, Eubacterium xylanophilum group, GCA-900066575, Lachnoclostridium, Lachnospiraceae UCG-006 and Romboutsia were negatively correlated with the levels of acetic acid and butyric acid. Moreover, the relative abundances of Bifidobacterium, Alistipes and Alloprevotella showed significant positive correlations with the mRNA expression levels of AMPK-α and HSL (p < 0.01 or p < 0.05) and significant negative correlations with the mRNA expression levels of ACC, FAS and PPAR-γ (p < 0.01 or p < 0.05), but the relative abundances of Parabacteroides, GCA-900066575, Lachnoclostridium and Lachnospiraceae UCG-006 showed the opposite trend (p < 0.01 or p < 0.05). In addition, Bifidobacterium, Bacteroides, Alistipes, Lachnospiraceae NK4A136 group and Alloprevotella were negatively correlated with TG and MDA in the liver and TG, TC, LDL-C, ALT and AST in the serum, whereas they were positively correlated with CAT, SOD, GSH-Px and GSH in the liver and HDL-C in the serum, but Parabacteroides, Eubacterium xylanophilum group, GCA-900066575, Lachnoclostridium, Lachnospiraceae UCG-006 and Romboutsia were opposite.
DISCUSSION
Obesity, a metabolic disease, is becoming more common and is prone to other metabolic complications (such as cardiovascular disease and type 2 diabetes) (Sonnenburg and Backhed, 2016;Dahiya et al., 2017), pharmacotherapy causes adverse effects (Srivastava and Apovian, 2018; Rosa-Gonçalves and Majerowicz, 2019), so it is imperative to seek natural and non-toxic antiobesity substances. Lactic acid bacteria have been used in fermented dairy products for more than 100 years (Aryana and Olson, 2017) and are generally regarded as safe (GRAS) (Özogul and Hamed, 2017). Moreover, to date, a large amount of evidence has shown that some lactic acid bacteria have an effective anti-obesity effect in animal research and clinical research (Dahiya et al., 2017). Previous researches in our laboratory have demonstrated that L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386 have strong acid and bile salt resistance, high cell adhesion activities and lipid metabolism regulation properties (Tang et al., 2016;Jin et al., 2017;Lu et al., 2018;Yan et al., 2019;Yue et al., 2019). Therefore, further researches were carried out and it was found that mixed lactobacilli (L. plantarum KLDS1.0344 and L. plantarum KLDS1.0386) could prevent the formation of obesity in high fat diet-fed mice , which confirmed the probiotic properties of the mixed lactobacilli. However, the underlying mechanisms were unclear. Thus, in this study, the possible mechanisms by which the same lactobacilli strains could prevent obesity were mined.
The anti-obesity effect of mixed lactobacilli in vivo was studied using the D12492 high fat diet-induced obesity model, which is a widely used model for obesity research. Convincing evidence has demonstrated that obesity is often accompanied by dyslipidemia, such as elevated levels of TC, TG, and LDL-C, as well as decreased HDL-C levels, which are risk factors for cardiovascular disease (Hunter and Hegele, 2017;Kotsis et al., 2017). Thus, after 8 weeks of animal feeding, the levels of TC, TG, LDL-C and HDL-C in the serum of the three groups of mice were measured to evaluate their blood lipid metabolism. Our data indicated that compared with the control group, the serum levels of TC, TG and LDL-C were significantly increased, and the serum levels of HDL-C were significantly decreased in the HFD group, as expected. However, such changes in mice fed a high fat diet were reversed by the mixed lactobacilli treatment, implying an improvement in metabolic dysfunction. The findings were in agreement with previous research that a probiotic L. plantarum strain isolated from the homemade kumiss could effectively inhibit serum TC, TG, LDL-C and HDL-C changes from feeding with high fat diet (Wang et al., 2012).
Obesity is frequently characterized by the development of non-alcoholic fatty liver disease (NAFLD) (Michelotti et al., 2013;Canfora et al., 2019). The liver, an important site of lipid metabolism in the body, maintains the balance of lipid synthesis and decomposition under normal conditions, whereas a highfat diet breaks this balance, causing excessive lipid accumulation (that is, hepatic steatosis) and oxidative stress in the liver, indicating the occurrence of liver injury (Rotman and Sanyal, 2017;Friedman et al., 2018). On this account, we aimed to determine the preventive effect of mixed lactobacilli intervention on NAFLD in high fat diet-fed mice. First, hepatic lipid accumulation was evaluated by measuring TG concentration. A significant elevated TG concentration in the liver was observed in the HFD group, which was consistent with the large number of lipid droplets in liver histopathological sections of the HFD group observed in our previous studies , implying that hepatic steatosis occurred. These results were in agreement with the earlier reports in which, obese mice suffered from non-alcoholic hepatic steatosis (Liou et al., 2019). However, it is noteworthy that mixed lactobacilli treatment substantially attenuated hepatic steatosis. Second, we assessed liver oxidative stress by analyzing antioxidant indices and lipid peroxidation biomarkers, including SOD, CAT, GSH-Px, GSH, and MDA. SOD, a critical antioxidant, can convert superoxide radical anions (O 2 − , incompletely reduced forms of oxygen) into hydrogen peroxide (H 2 O 2 ), which in turn is catalyzed into water by CAT and GSH-Px (Turrens, 2003;Borrelli et al., 2018). GSH as a non-enzymatic antioxidant can directly scavenge reactive oxygen species (ROS) by binding with them (Anu et al., 2014). MDA is the end product of free radical-mediated lipid peroxidation and is currently considered a reliable biomarker related to oxidative stress (Wang et al., 2013). Our results demonstrated that oral administration of mixed lactobacilli significantly increased levels of SOD, CAT, GSH-Px, and GSH while significantly reducing MDA levels in high fat diet-fed mice, suggesting amelioration of liver oxidative stress. Finally, serum ALT and AST levels, often used to determine the extent of liver function damage (Zhao et al., 2017a), were measured. We found that serum levels of ALT and AST were significantly decreased in the MX group. Similarly, it has been previously stated that the treatment of the probiotic mixture (6 Lactobacillus and 3 Bifidobacterium) reduced serum ALT and AST levels in high fat diet-fed rats (Liang et al., 2018). Taken together, the mixed lactobacilli could inhibit liver lipid accumulation, enhance liver antioxidant capacity and improve liver function.
To further explore the potential mechanisms by which mixed lactobacilli inhibited obesity induced by high fat diet in mice, we examined the expression of lipid metabolism-related genes in epididymal adipose tissue. Adipose tissue, as one of the main sites for storing triglycerides, is an important organ regulating lipid metabolism (Schneeberger et al., 2015;Park et al., 2017). Extensive amounts of reports have shown that PPAR-γ and C/EBP-α are key transcription factors for adipocyte differentiation in adipose tissue (Prestwich and Macdougald, 2007). Park et al. (2014) demonstrated that L. plantarum LG42 supplementation strikingly decreased mRNA expression of PPAR-γ and C/EBP-α in high fat diet-fed mice. In this study, compared with HFD group, mixed lactobacilli treatment significantly down-regulated PPAR-γ mRNA levels, and also lowered the mRNA level of C/EBP-α but not significant. The AMPK pathway is a classical pathway that regulates lipid metabolism. AMPK is a known cellular energy sensor that shuts down anabolic pathways such as fatty acid synthesis (Hardie and Pan, 2002). ACC and FAS are key enzymes in fatty acid synthesis (Liou et al., 2019). Specifically, activation of AMPK-α stimulates ACC phosphorylation, which blocks the expression of FAS (Liou et al., 2019). HSL is the rate-limiting enzyme in the breakdown of triglycerides in adipose tissue (Liou et al., 2019). In the present study, the mRNA levels of AMPK-α and HSL were remarkably reduced in the HFD group compared with the control group, while the mRNA levels of ACC and FAS were significantly elevated, and these changes were completely eliminated by the treatment of mixed lactobacilli, similar to the study by Qiao et al. (Qiao et al., 2015).
Accumulating evidence suggests that the gut microbiota is a key environmental factor in the development of obesity (Walker and Julian, 2013). For instance, a previous study showed that germ-free, lean mice transplanted with intestinal microbiota from obese mice became obese, while those transplanted with intestinal flora from lean mice remained lean (Turnbaugh et al., 2006). Accordingly, gut microbiota is considered as a new target for the prevention and treatment of obesity. Subsequently, to investigate whether the mixed lactobacilli exerted its antiobesity effects also through regulating the gut microbiota, we determined the intestinal bacteria composition of the three groups of mice. At the phylum level, we observed that the relative abundance of Firmicutes increased and the relative abundance of Bacteroidetes decreased in the HFD group compared with that in the control group, resulting in an significant increase in the Control, control group; HFD, high fat diet group; and MX, mixed lactobacilli group. All data are represented as mean ± SD. *p < 0.05 and **p < 0.01: significant difference compared with mice in the HFD group.
Lachnoclostridium (Zhao et al., 2017b), Lachnospiraceae UCG-006 (Kang et al., 2019), and Romboutsia (Zhao et al., 2018) were positively correlated with obesity. Bifidobacterium, a beneficial microbial species, producing lactic acid and acetic acid, can reduce intestinal pH and inhibit the growth of various detrimental bacteria to maintain intestinal health (Wang R. et al., 2018). Bacteroides, Alistipes, Lachnospiraceae NK4A136 group and Alloprevotella are also capable of producing SCFAs such as acetic acid and butyric acid (Borton et al., 2017;Gotoh et al., 2017;Jiang et al., 2018;Yin et al., 2018). It was reported that the abundance of Bacteroides was reduced in individuals with atherosclerotic cardiovascular disease (Jie et al., 2017) and post-inflammatory irritable bowel syndrome . Alistipes can effectively inhibit inflammation via preventing LPS-induced TNF-α release at higher concentrations (Canfora et al., 2015) and has been found at lower levels in the gut of patients with hepatocellular carcinoma (Ren et al., 2019), colitis (Jiang et al., 2018) and non-alcoholic fatty liver disease (Tang et al., 2018). Previous studies have indicated that Lachnospiraceae NK4A136 group may have an anti-inflammatory effect and its relative abundance was reduced in mice with immune dysfunction caused by cyclophosphamide . Alloprevotella has been shown to be significantly reduced in mice with metabolic syndrome (Shang et al., 2017), a disease easily caused by obesity. According to previous studies, Parabacteroides was enriched in individuals with type 2 diabetes (Qin et al., 2012) and Behcet's disease (Ye et al., 2018). Eubacterium xylanophilum group, GCA-900066575, Lachnoclostridium, Lachnospiraceae UCG-006 and Romboutsia belong to Lachnospiraceae, which may suppress the growth of SCFA-producing bacteria (Duncan et al., 2002) and were found to be associated with metabolic disorders and colon cancer (Keishi and Kikuji, 2014;Meehan and Beiko, 2014). Importantly, the mixed lactobacilli treatment reversed the changes in several of the above genus. Changes in the gut microbiota cause changes in the SCFAs levels, which are negatively correlated with obesity (Coelho et al., 2018). have shown that supplementation of SCFAs in the diet dramatically inhibited the body weight gain in high fat diet-fed mice. SCFAs produced by bacteria may induce the release of gut-derived satiety hormones, such as peptide YY (PYY) and glucagon-like peptide-1 (GLP-1), which suppress food intake and increase satiety (Canfora et al., 2015). Furthermore, studies have indicated that SCFAs can reach the liver through the portal vein, thereby activating the nuclear erythroid 2-related factor 2 (Nrf-2) pathway to alleviate oxidative stress and activating the AMPK pathway to inhibit lipid accumulation (Canfora et al., 2015). In addition, SCFAs have been reported to regulate lipid metabolism in adipose tissue by modulating related signaling pathways (Gijs et al., 2015). Therefore, we determined the levels of acetic acid, butyric acid, propionic acid and valeric acid in each group by GC-MS. Intriguingly, compared to the HFD group, mixed lactobacilli supplementation significantly increased the concentrations of acetic acid and butyric acid, which was in agreement with changes in the intestinal microbiota. Based on the above results, we speculated that the mixed lactobacilli treatment might attenuate liver oxidative stress, reduce liver lipid accumulation and improve lipid metabolism of adipose tissue by regulating intestinal microbiota and metabolites. In line with our finding, Yin et al. (2018) concluded that one of the potential mechanisms of melatonin inhibiting obesity might related to the increased levels of acetic acid and butyric acid.
To further confirm the role of gut microbiota in anti-obesity, we also analyzed the correlation between the relative abundance of mixed lactobacilli-manipulated gut microbiota and obesityrelated indicators through Spearman's correlation analysis. In general, Bifidobacterium, Bacteroides, Alistipes, Lachnospiraceae NK4A136 group and Alloprevotella were positively correlated with acetic acid and butyric acid in intestine, AMPK-α and HSL in epididymal adipose tissue, CAT, SOD, GSH-Px and GSH in liver, and HDL-C in serum, while they were negatively correlated with ACC, FAS and PPAR-γ in epididymal adipose tissue, TG and MDA in liver, and TG, TC, LDL-C, ALT and AST in serum, but Parabacteroides, Eubacterium xylanophilum group, GCA-900066575, Lachnoclostridium, Lachnospiraceae UCG-006 and Romboutsia displayed the opposite trend. Therefore, the results also indicated that mixed lactobacilli administration could effectively modulate the gut microbiota induced by high fat diet, and then improve obesity-related indicators.
CONCLUSION
The mixed lactobacilli intervention could alleviate the changes induced by high fat diet including disordered blood lipids, liver oxidative stress and liver injury. Further, the mixed lactobacilli intervention modulated the gut microbiota of high fat diet-fed mice, resulting in increased SCFAs (acetic acid and butyric acid), which regulated lipid metabolism in adipose tissue and reduced liver lipid accumulation, thereby preventing obesity. Hence, our results offer significant insight into the oral administration of mixed lactobacilli to suppress obesity in high fat diet-fed mice.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of the Northeast Agricultural University. | 8,145.2 | 2020-03-26T00:00:00.000 | [
"Medicine",
"Biology",
"Environmental Science"
] |
A Conciliatory Answer to the Paradox of the Ravens
In the Paradox of the Ravens, a set of otherwise intuitive claims about evidence seems to be inconsistent. Most attempts at answering the paradox involve rejecting a member of the set, which seems to require a conflict either with commonsense intuitions or with some of our best confirmation theories. In contrast, I argue that the appearance of an inconsistency is misleading: ‘confirms’ and cognate terms feature a significant ambiguity when applied to universal generalisations. In particular, the claim that some evidence confirms a universal generalisation ordinarily suggests, in part, that the evidence confirms the reliability of predicting that something which satisfies the antecedent will also satisfy the consequent. I distinguish between the familiar relation of confirmation simpliciter and what I shall call ‘predictive confirmation’. I use them to formulate my answer, illustrate it in a very simple probabilistic model, and defend it against objections. I conclude that, once our evidential concepts are sufficiently clarified, there is no sense in which the initial claims are both plausible and inconsistent.
Introduction
In the Paradox of the Ravens (PR) a number of plausible claims about confirmation seem to commit us to an excessively broad analysis of evidence, such that discovering non-black non-ravens (or learning sentences reporting them) confirms the hypothesis that 'All ravens are black' whenever it also confirms 'All non-black things are non-ravens'. 1 The challenge is either to reject one of these claims or to learn to live with this paradoxical result. The PR has been a persistent thorn in the side of confirmation theory: if our systems force us to make seemingly bizarre claims about confirming simple universal generalisations, then how can they have authority in more advanced applications? Many answers to the PR 1 3 either reject the confirmation theories with this result or the intuitions that conflict with it. In contrast, I shall argue that both the apparently paradoxical claims and our intuitions are correct, because our intuitions are not about the type of evidential relation that confirmation theorists are explicating. My answer is 'conciliatory' in the sense that both those confirmation theorists who accept the 'paradoxical' results associated with the PR and those who reject them are both correct, but each group is only correct for one of the two different types of evidential relation. The PR is a misunderstanding caused by an ambiguity in terms like 'confirms' and 'evidence' when applied to universal generalisations.
Ordinarily, when we discuss positive evidential support for universal generalisations, there is a pragmatic implication that the confirming evidence makes it more reliable to infer that something satisfying the antecedent will also satisfy the consequent. When the evidence does so, it provides what I call 'predictive confirmation'. This type of evidential support comes apart from confirmation simpliciter in the PR and this creates the appearance of paradox.
In Sect. 2, I briefly discuss the PR, its scope, and categorise the literature. In Sect. 3, I define predictive confirmation. I offer my answer to the PR in Sect. 4. I finish by considering some objections in Sect. 5.
The Paradox of the Ravens
Imagine Janina, a logician who is considering the concept of confirmation. Suppose that she has background information B. Let X and Y be some expressions: they could be simple ascriptions of predicates, but they could also be a complex matrix such as a connected series of predicates, an existential statement, a statement using relations like 'less than', modal operators, or a combination of these expressions. Consider an X and a Y such that Janina believes the following four claims: 1. 'All X are Y' and 'All ¬Y are ¬X' are logically equivalent. This claim is sometimes called the "Scientific Laws Condition" (Swinburne 1971, 318). 2. Whatever confirms a hypothesis H, relative to some background information, also confirms any hypothesis that is logically equivalent to H. This is sometimes called the "Equivalence Condition" (Hempel 1945, 12). 3. The hypothesis 'All ¬Y are ¬X' is confirmed (relative to background knowledge B) by the information that all of a sample that each satisfy the expression ¬Y also satisfy the expression ¬X. This is an instance, for X and Y, of the Nicod Criterion, which states that 'All Φ are Ψ' is always confirmed for any expressions Φ and Ψ. 2 4. The hypothesis 'All X are Y' is not confirmed, relative to B, by evidence in favour of the hypothesis that all members of a sample that each satisfy the expression ¬Y also satisfy the expression ¬X.
3
As these four claims are inconsistent, Janina must abandon at least one. 3 For example, if she believes that (a) 'This non-black thing (such as a white thing) is a non-raven (such as a shoe)' confirms (b) 'All non-black things are non-ravens', but also believes that (a) does not confirm the hypothesis (c) 'All ravens are black', then she cannot consistently also believe the Scientific Laws Condition and the Equivalence Condition. Summarised in a sentence, my answer to this antimony is that the PR trades on an equivocation, and we should disambiguate 'confirms' so that (4) is false on one interpretation and (2) is false on the other interpretation. There is no correct interpretation of (1)-(4) that generates a set of claims that are individually plausible but collectively inconsistent.
Three General Approaches to the Paradox
To provide a detailed map of the huge tangled forest of the PR literature is beyond this article's scope. However, most answers fall into one of three categories: A. The ordinary intuitions are more or less correct: the mistake lies with confirmation theorists who fail to appreciate some crucial condition of evidential support for universal generalisations, like natural kinds or degrees of naturalness. The answers of Quine (1970) and Rinard (2014) are examples. B. The ordinary intuitions are incorrect. Typically, they are explained as products of a cognitive illusion, in which people confuse a very small degree of confirmation with no confirmation. The first attempt was by Hosiasson-Lindenbaum (1940); this approach perhaps reaches its apex with the extremely impressive formal analysis by Fitelson and Hawthorne (2010). C. The intuitions are correct, but there is also nothing wrong with the confirmation theories that seem to contradict them. According to this answer, there are multiple types of confirmation relations: the intuitions are about a concept of favourable evidence that is distinct from confirmation simpliciter that Hempel (and most other confirmation theorists) have attempted to analyse. In the existing versions of this approach to the PR, the additional type of confirmation relation is claimed to be 'selective' confirmation, which requires that evidence (1) provides confirmation simpliciter for the hypothesis and (2) disconfirms a rival, in some suitable sense of 'rival'. Allegedly, the report 'This raven is black' selects in favour of 'All ravens are black' against rival hypotheses like 'All ravens are not black' (relative to the implicit background knowledge in the PR) whereas 'This non-black thing is a non-raven' is consistent with either 'rival'. This strategy originates with Goodman (1954, 72) while Glymour (1980, 157-160) provides a more sophisticated version.
Each of these approaches has its strengths, but also its challenges. For example, for (A) there is the worry that many confirmation theorists have become quite comfortable with the PR results. Have these confirmation theorists really lost touch with the relevant concepts of naturalness? If this is really such a foundational concept in confirmation theory, it is puzzling that so many philosophers are capable of ignoring its effects in the PR case. Arguably, philosophers' intuitions are not worth much, but ceteris paribus it would 1 3 be preferable to do justice to them. Furthermore, it is not obvious that the strangeness in the PR is that the evidence fails to give stronger reasons for believing the hypothesis, in some sense of 'belief' such as greater numerical credence, acceptability, or some qualitative sense. I agree with (A) that there is something strange about such claims, but it is not clear that it is the same as the strangeness (given our actual background knowledge) of claims like 'A white shoe evinces the hypothesis that almost all ravens are black' or 'A red herring evinces that the next raven that I see will be black', where source of the strangeness is clearly the sense that the evidence fails to give reasons to believe the hypothesis given our background knowledge.
In the case of (B), there is the issue that most people do not have a problem with the claim that non-black non-ravens confirm 'All non-black things are non-ravens'. Yet any evidence has the same degree of confirmation towards logically equivalent hypotheses. Therefore, if (B) is correct, then people would presumably be just as resistant to the claim that non-black non-ravens confirm 'All non-black things are non-ravens' as they are to the claim that non-black non-ravens confirm 'All ravens are black'. This problem was first identified by Scheffler (1968, 284-285). 4 Similarly, as Fisch (1984, 49) points out, many people think that there is something strange about black ravens confirming 'All non-black things are non-ravens'. One might respond that people not only tend to ignore small degrees of confirmation, but also tend not to recognise that 'All ravens are black' is contrapositable into 'All non-black things are non-ravens'. However, (1) this is ad hoc, (2) it is implausible that anyone would be ignorant of basic deductive relations and have a sense of degrees of confirmation that is analogous to the (many) Bayesian explications of this concept, and (3) knowledge of the contrapositability does not seem, in itself, to remove the strangeness of the PR. (This last point is evinced by philosophers of type (A), because it would be absurd to claim that Quine was ignorant of the contrapositability of universal generalisations!) There is also the concern that, provided that the degree of confirmation for each unobserved non-raven towards 'All ravens are black' is a non-zero real number, there will be some quantity of non-black non-ravens (say, the discovery of a trillion stars in a newly detected group of galaxies) which provide stronger evidence for 'All ravens are black' than discovering a raven, and this seems no less paradoxical than the original PR scenario. Degrees of confirmation are brilliant additions to formal epistemology, but it is unproven that they can explain away the PR.
For (C), the principal problem is identifying a sense of 'confirmation' that would be plausible as an interpretation of 'Non-black non-ravens confirm the hypothesis that all ravens are black', but also does not simply recreate the paradoxical results. Among the many problems for the selective confirmation approaches to (3), consider the statistical generalisation 'Just 1% of non-black things are non-ravens'. It seems that, under any plausible analysis of 'rival hypotheses', this hypothesis is a rival to 'All ravens are black'. (For example, the hypotheses cannot both be true given our background information; the statistical generalisation is consistent with what we know; and it also meets Glymour's other conditions for rivalrous hypotheses.) Yet the statistical generalisation is presumably disconfirmed, relative to the implicit background knowledge in the PR scenario, by evidence 4 There is a similar problem for those who think that the illusion of paradox is caused by the fact that non-black non-ravens are 'uninteresting' confirmations of 'All ravens are black', because they mean that instances like (Ra → Ba) are only true because "their antecedents are false" (Schock 1984, 349). Why are non-black non-ravens uninteresting when the universal generalisation is formulated as 'All ravens are black', but interesting when formulated as 'All non-black things are non-ravens'? such as a sample report that all of a large sample of non-black things are non-ravens. Therefore, discovering such a sample would provide selective confirmation by confirming 'All ravens are black' simpliciter and disconfirm a rival of this hypothesis, and consequently a report of non-black non-ravens would provide selective confirmation as well as confirmation simpliciter.
My criticisms of (A), (B), and (C) sketched above are brief and inconclusive. There are many, many good responses that could be made by supporters of these answers. Indeed, the current debate seems to be at an impasse, with a plethora of objections for each account that might take decades to evaluate and rigorously address. However, they at least provide some challenges that should be met by any new addition to the vast corpus of the PR's answers.
My answer is a version of (C), but one that is very different from the selective confirmation approach. Akin to them, I shall argue that the key to answering the PR is to distinguish between two different types of confirmation: confirmation simpliciter and what I shall call "predictive confirmation". I shall argue that, when we make this disambiguation, we can come to the conciliatory judgement that the PR is generated by a misunderstanding between two fundamentally correct groups rather than a mistake by either group. The misunderstanding is caused by the fact that confirmation theorists have focused on confirmation simpliciter. (This focus is justifiable, because in the next section it will be apparent that confirmation simpliciter has a vastly wider domain than predictive confirmation.) The Equivalence Condition is true for confirmation simpliciter, but not predictive confirmation. It is true that discovering non-black non-ravens fails to confirm 'All ravens are black' in the predictive sense of 'confirms', under the assumptions of the PR, but not the simpliciter sense of 'confirms'. On neither sense of confirmation are (1)-(4) in Sect. 2.1 all intuitively plausible, and therefore there is no real paradox.
Predictive Confirmation
Before introducing my resolution of the PR, it is necessary to (1) consider some of the pragmatic relations between universal generalisations and predictions, and (2) introduce the concept of predictive confirmation.
Universal Generalisations and Predictions
If I assert that 'All ravens are black', then I seem to be suggesting (in many circumstances) that it is reliable to suppose that, if something was a raven, then it would be black. Yet this hypothesis could be true, given the standard analysis of universal generalisations' semantics, merely because there are no ravens. In the same way, the hypothesis could be probable, given our evidence, merely because of good evidence that there are no ravens. Even when a universal generalisation is vacuously true but intuitively assertable, such as 'All ideal gases satisfy the Ideal Gas Law', we seem to have good reasons to believe that if something were an ideal gas (which would require a universe incompatible with our actual physics) then it would satisfy the law.
Pragmatics offers a means of explaining this divergence of assertability and truth/probability: under typical circumstances, the assertion of a universal generalisation suggests that 1 3 it is reliable to predict an instance of the consequent given an instance of the antecedent. 5 By analogy, most contemporary logicians agree that 'P but Q' has the same semantics as 'P and Q' and yet it clearly has different pragmatics, because it typically suggests a contrast between the fact that P and the fact that Q. 6 The exact role that the associated predictions play in reasoning will depend on the particular epistemology; my answer to the PR will be compatible with an extremely wide range of uses of the associated predictions in different theories of reasoning (for example, Bayesian formal epistemologies versus ampliative inference-rule logics like default logics) provided that they have some role.
Universal generalisations are not the only way of suggesting such predictions by asserting a general hypothesis. Asserting that 'Almost all ravens are black' also suggests, in the absence of defeaters (such as 'This is an Australian raven and almost all Australian ravens are white') that it is reliable to predict that some particular raven will be black. 7 While it is easy to give cases where asserting a universal generalisation is not necessary for recommending predictions in this way, it is difficult to think of realistic cases where they are not sufficient. What I say below will not depend on whether they are necessary.
Beyond these observations, there are other pieces of evidence for the notion that universal generalisations have a pragmatic association with predictions of their consequent terms given their antecedent terms. Firstly, contrapositives can have different pragmatics. This pragmatic asymmetry helps resolve some curiosities about universal generalisations. For example, many people have a feeling that 'All ravens are black' is 'about' ravens, whereas 'All non-black things are non-ravens' is 'about' non-black things. (Two examples are Wright (1966) and Couvalis (1998, 45). Additionally, Hempel (1945, 17) and Lipton (2007, 79) note that many people seem to have this intuition.) I admit that 'aboutness' is anything but precise. Yet this sense could be explained by the idea that the assertion 'All ravens are black' typically recommends predicting that something with ravenness will also have blackness, whereas the contrapositive 'All non-black things are non-ravens' typically recommends predictions from the absence of blackness to the absence of ravenhood. In many contexts, the reliability of these predictive policies will differ: if it was true that just 99% of ravens were black, then inferring from something being a raven to its blackness could be a highly reliable policy, yet it would still be possible (though very surprising!) that only a tiny but non-zero percentage of non-black things were non-ravens, and therefore that predicting non-ravenhood from non-blackness would be very unreliable. 8 The predictions suggested by one formulation of a universal generalisation can also diverge from its contrapositive form when the universal generalisation is probable only because there is a high probability that it is vacuously satisfied. For instance, it is very likely that 'All planets made of pure platinum are exceptions to the laws of thermodynamics' is 8 An inferential policy can be reliable in some contexts and unreliable in others. I shall focus on general reliability in this article, as it is the sense that seems to be particularly important for the PR. 5 I am not arguing that these implicatures are part of the semantics of universal generalisations; in fact, as far as possible, I shall remain agnostic about the semantics of these sentences. 6 There is not always a contrast between the disjuncts, as opposed to some other propositions: Douven (2017Douven ( , 1543 gives the example of 'He walks slowly, but he walks '. 7 The hypothesis that it is reliable to predict instances of Y given instances of X in general does not entail that we should always do so. For instance, we know from the Problem of the Reference Class that the belief of a probabilistic statement about a population might be defeated, in some circumstances, by a belief about a subset of that population. I might believe that it is generally reliable to expect that someone who was awarded a PhD is alive today (due to the explosion of postgraduate studies in the past 70 years) but it would not be rational to have this expectation if I know that the PhD was awarded before 1930.
3
probably true, when this hypothesis is interpreted as a purely extensional hypothesis, but only because pure platinum planets are so improbable. Given our actual background information, we can reliably infer from 'This is not an exception to the laws of thermodynamics' to 'This is not a pure platinum planet', but we cannot reliably infer from 'This is a pure platinum planet' to 'This is an exception to the laws of thermodynamics'.
Another advantage of postulating a pragmatic connection between universal generalisations and predictions is that it provides a sense in which purely extensional generalisations like (1) 'All the coins in my pocket are pennies' and (2) 'All the coins in my pocket are not pennies' can be 'rivals', even though they are logically consistent according to standard contemporary semantics. Even if we say that they would both be true if my pockets are devoid of coins, we can note that their assertions recommend different predictions: my assertion of (1) would tend to make you expect that, if I reach to my pocket to take out some coins, they will be pennies, whereas my assertion of (2) would tend to make you expect that they will not be pennies. According to my suggested analysis of their pragmatics, hypotheses of the form 'All X are Y' and 'All X are ¬Y' are associated with incompatible predictions, even if they are logically compatible.
A third advantage is that, without trying to incorporate modality into the semantics of universal generalisations, we can do justice to this sort of observation: there seems to be something wrong with asserting that (χ) 'All people who sleep unprotected overnight on the Elephant's Foot in 2019 go on to live a further 10 years'. (The Elephant's Foot is an extremely radioactive fused blob of corium that was produced by the Chernobyl disaster in 1986. A few hours of exposure would be swiftly fatal.) The mere fact that it is almost certain that no-one will sleep overnight on the Elephant's Foot in 2019 seems insufficient for justifiably making such an assertion. 9 In contrast, asserting (η) 'All things that do not live a further 10 years after 2019 are not people who sleep overnight on the Elephant's Foot' would be rather awkward and not something that we would normally say, but asserting η would lack the strangeness of asserting χ. At least part of the contrast might be due to the fact that asserting χ suggests some potentially lethal predictions, whereas asserting η would presumably be useless, but would not recommend any unwise predictions.
Confirmation and Universal Generalisations
I shall now argue that the confirmation of universal generalisations is multifaceted: there is both confirmation simpliciter 10 and what I shall call predictive confirmation. This second form of favourable evidence occurs when the evidence both confirms simpliciter a universal generalisation and confirms the reliability of making its pragmatically associated predictions. Here is an informal definition of predictive confirmation that is not relativized to any particular confirmation theory:
Predictive Confirmation
E is predictive evidence for a universal generalisation of the form 'All X are Y' relative to B = df (1) E confirms 'All X are Y' relative to B and (2) E confirms the prediction that 1 3 Ya relative to (B ^ Xa), where the individual constant a refers to an otherwise unknown individual, 11 while Xa and Ya are the assertions that a satisfies the expressions X and Y respectively.
Thus, if E both confirms 'All ravens are black' in the simpliciter sense, given our background information, and E confirms the prediction that an unknown individual will be black, given our background information and the postulate that the individual is a raven, then E confirms 'All ravens are black' in the predictive sense of confirmation. If necessary, a could refer to a collection of objects (like a social group or class of chemical elements) rather than a particular individual.
Some clarifications are needed: firstly, I am not saying that we should actually infer Xa without evidence. The exercise of postulating the universal generalisation's antecedent is imaginative, not inferential: we should ask whether E would support the prediction that Ya if we knew that Xa. Secondly, on the nature of B: in simple cases, B is our relevant background information. In cases where Xa and the relevant background information are inconsistent, B is a set of statements matching our background information except with minimal modifications to achieve consistency with Xa. This clarification covers both mutual inconsistency and the case where our background information is internally inconsistent.
One might wonder why I include clause (1) in the definition. Carnap (1962, 572-573) defines a similar concept, which he calls "qualified-instance confirmation", and this concept is similar to predictive confirmation except for clause (1). 12 While predictive confirmation and qualified-instance confirmation are similar, they differ in a way that means that my concept avoids one criticism of Carnap's concept. As Gower (1997, 221) notes, a hypothesis can have increasing and/or high qualified-instance confirmation even if we accept a counterexample to it. While it is plausible that, if we discover that a large sample of white swans deep in the Amazon rainforest, this new information can confirm that 'All swans are white' is a reliable rule-of-thumb, it is not clear that there is a sense of 'evidence' in which this information can provide evidence (or 'confirmation') for the hypothesis is true. Carnap could answer Gower's criticism by saying that what he was trying to explicate was exactly this sense of a reliable rule-of-thumb. That response is plausible to me, but it highlights the difference between my explication and Carnap's: I am trying to explicate cases where people say that evidence does or does not provide evidence for a universal generalisation, rather than merely the reliability of the hypothesis as a rule-of-thumb. Nonetheless, I must acknowledge a debt of inspiration to Carnap; predictive confirmation could even be understood as confirmation simpliciter plus Carnap's qualified-instance induction.
I define predictive disconfirmation in an analogous way to predictive confirmation: E is predictive evidence against 'All X are Y' relative to B if and only if E disconfirms 'All X are Y' relative to B or E disconfirms the prediction that some unknown individual satisfies Y given B and the postulate that it satisfies X. However, I do not yet know of any cases in the philosophy of science where predictive disconfirmation is a useful concept; I define it for the sake of completeness. 11 In particular, a must not be mentioned in E or B. Put another way, what we know about a is only what we know about any arbitrary thing that satisfies X. Whether we imagine that a is some as-yet-unknown thing or some imagined addition to the universe does not seem to make a significant difference. The former seems more intuitive, whereas the latter is neater in the unusual cases where our background knowledge contains the information that we are already familiar with every instance of X, such as 'The ravens that I saw yesterday'. 12 See Zabell (2004, Sect. 6.1) for a contemporary discussion.
3
For example, assume that ∀(x)(X → Y) is an acceptable formalisation of 'All X are Y' and assume the adequacy of the standard Bayesian analysis of confirmation. (On the Bayesian analysis, confirmation is positive probabilistic relevance: E confirms H relative to B if and only if P(H | E ^ B) > P(H | B).) Given those assumptions.
Bayesian Predictive Confirmation
E predictively confirms a universal generalisation 'All X are Y' relative to background information B = df the following are both true: Informally, if (1) E confirms H given B and (2) E increases the probability of the prediction that some unknown individual satisfies F, given B and the postulate that the individual satisfies G, then E predictively confirms 'All X are Y'. 13 With these details filled in, it is possible to give a very simple Bayesian example of where predictive confirmation and confirmation simpliciter come apart. 14 Imagine that you are playing game with a friend where you can offer each other bets on the overall distribution of 'heads' and 'tails' in exactly 10 tosses of a two-sided coin that you both know to be fair. The bets can be offered at any time, though both players must accept them. Suppose that the coin has been tossed 5 times and landed 'heads' on each occasion. Knowing this information E provides you some evidence that 'All 10 coin tosses in the game will land heads' and thus makes it more rational to accept relatively poor odds that this universal generalisation is true. However, E does not provide predictive confirmation for the universal generalisation. The tosses are independent, and therefore if we suppose that some otherwise unspecified toss a is one of the remaining 5 tosses in the game, then the probability that a lands heads given E is the same as the prior probability of 0.5. Consequently, on a Bayesian identification of confirmation simpliciter with positive probabilistic relevance, the two concepts can come apart.
A fundamental difference between the standard Bayesian definition of confirmation (which I am not criticising as such) and predictive confirmation is that the Equivalence Condition holds for the former but not the latter. Consider clause (2) in the Bayesian definition of predictive confirmation. For 'All F are G' (using F and G to stand for some particular predicates) this clause requires that E confirms Ga given Fa and B. For 'All ¬G are ¬F', the clause requires that E confirms ¬Fa given ¬Ga and B. Yet there will be many circumstances in which E confirms Ga given Fa and B, but not ¬Fa given ¬Ga and B, or vice versa. I shall discuss a simple case in Sect. 4.2.
Predictive confirmation has advantages that are very similar to some of the considerations that I noted towards the end of Sect. 3.1. Firstly, since the same evidence cannot confirm both the prediction that Ya and the prediction that ¬Ya, relative to the same background knowledge and assumption of Xa, it follows that the same evidence cannot predictively confirm both that 'All X are Y' and that 'All X are ¬Y'. In this sense of 'evidence', 13 Note the similarity here to the Ramsey Test for conditionals (Ramsey 1990). In particular, clause (2) should be understood as similar to the hypothetical steps taken for simple conditionals on Ramsey's view. 14 I owe this example to an anonymous referee. See also an example by Dretske (1977).
there cannot be evidence that supports both 'All phlogiston is radioactive' and 'All phlogiston is not radioactive'. Secondly, evidence that no-one will visit the Elephant's Foot in 2019 confirms 'All people who sleep unprotected overnight on the Elephant's Foot in 2019 go on to live a further 10 years' in the simpliciter sense, but not in the predictive sense, because the evidence fails to confirm the prediction that a person who slept unprotected overnight on the Elephant's Foot in 2019 would go on to live a further 10 years and thus satisfy the second clause requirement for predictive confirmation. Finally, there is a significant sense in which hypotheses like 'All Higgs bosons are electrically charged' are genuinely about their antecedents, even if they are logically equivalent to hypotheses with different antecedents. This hypothesis is only predictively confirmed by evidence that favours the prediction that an unknown Higgs boson would be electrically charged, and such predictions are not contrapositable.
Although predictive confirmation is a novel idea, at least as I have defined it, and it does not seem to presuppose any particularly controversial theses in the philosophy of language, there are affinities between my notion and some recent work on conditionals. According to the inferentialist theory of conditionals, an utterance of 'If P, then Q' is true if and only if (1) P is evidentially relevant to Q given the utterer's background knowledge 15 and (2) P is consistent with those background knowledge or else is evidence for Q in the absence of relevant background beliefs (Krzyżanowska et al. 2013;2014;Douven 2017;Douven et al. 2018). This idea is not much younger than Western philosophy (something like it was apparently proposed by Chrysippus) but unlike some earlier versions of the same idea, the inferential connection does not have to be deductive. This approach is logically independent of my own: it is a thesis about the semantics of conditionals and inferentialists primarily discuss unquantified conditionals, whereas I am concerned with the pragmatics of universally quantified conditionals. However, we are motivated by similar problems and both make use of inferential notions in our analyses. Much of the evidence I cite for my hypotheses regarding confirmation could also be cited as evidence for the inferentialist analysis and vice versa. 16 It is perhaps already clear how predictive confirmation will help with the PR. Before going on make this point in detail, I shall close this section by emphasising that I think that both predictive confirmation and confirmation simpliciter are legitimate senses of the claim that a universal generalisation is confirmed. However, predictive confirmation is apparently the typical sense outside of formal epistemology.
Predictive Confirmation and the Paradox
Let us now return to the PR. Uncontroversially, evidence of ¬Y's that are ¬X need not confirm the prediction that Y is true of an unknown individual a, relative to some background information and the assumption that Xa, even though (by supposition) the evidence 1 3 confirms the prediction that ¬Xa, relative to that background information and the assumption that ¬Ya. Put simply, the same evidence might support the reliability of predictions from instances ¬Y to ¬X, without also supporting the reliability of predictions from instances from X to Y. Evidence that 'This non-raven is non-black' could predictively confirm 'All non-black things are non-ravens' without also predictively confirming 'All ravens are black'. The same is also true for evidence of black ravens, which could predictively confirm 'All ravens are black' and yet not predictively confirm 'All non-black things are nonravens'. In the predictive sense of confirmation, the puzzling PR scenario does not occur.
The PR is simply due to a misunderstanding: confirmation theorists have (justifiably) focused on confirmation simpliciter, but our talk about evidential relations is subtle and complex, and the ordinary way of interpreting assertions of the form 'E is evidence that all X are Y' is that they are claims about predictive confirmation. Thus, the claim 'Discovering the existence of my partner's pair of white shoes provides me evidence for the hypothesis that all ravens are black' ordinarily sounds like an assertion that is obviously false, assuming the implicit background information, because (in that context) white shoes provide no support for the prediction that an unknown raven would be black. Contrariwise, the claim can seem unparadoxical if one is sufficiently clear that confirmation simpliciter (as analysed by a theory like Hempel's or standard Bayesianism) is the subject of the assertion: for instance, there are probability distributions in which 'All ravens are black' is more probable relative to the total evidence after the discovery of some non-black non-ravens, so that arguably we can be more confident in the hypothesis. Once we have disambiguated terms such as 'evidence for' or 'confirms', we can see that there is a sense in which the commonsense intuitions truly apply and a sense in which they do not apply. It is the latter sense that confirmation theorists are focusing on, and thus there is no fundamental conflict, except among those who extend either sense to where they do not apply.
It is worth making some clarificatory points about my answer. Firstly, I am not claiming that natural language universal generalisations are consistent with counterexamples, nor am I claiming that they are really statistical generalisations or ambiguous with statistical generalisations. I have argued that an assertion such as 'All panther mushrooms are poisonous' and an assertion such as 'Almost all panther mushrooms are poisonous' have very similar pragmatic roles, but I am not claiming that their semantics are identical or even similar. Secondly, I am not denying that universal generalisations can be confirmed. To the contrary, my answer to the PR depends not only on the possibility that they can be confirmed in the simpliciter sense of standard confirmation theory, but also in the predictive sense.
Predictive confirmation can come from observing instances of a universal generalisation, as in the case of observing black ravens, but this is not the only possible source of predictive confirmation. A scientist might be investigating the hypothesis that 'All chromium vaporizes at approximately 347 kilojoules per mole under standard laboratory conditions', but she might not be in a position to accept that the subject and predicate terms of the hypothesis have been satisfied given her instrument's readings. Nonetheless, her evidence might confirm that a sample vaporized under those conditions, and with suitable background information thereby confirm the hypothesis.
Relative to some background information, it can be the case that statements of the form (¬Xb ^ ¬Xb) provide evidence for the prediction that Ya, given the postulate that Xa, such that it predictively confirms statements of the form 'All X are Y'. 17 For example, imagine 1 3 that you encounter an Amazonian tribe whose language is largely unknown to you. They seem to be either describing a white raven or a grey parrot, but the language barrier creates difficulties in interpreting their observation reports. This might provide you with evidence against the prediction that some unknown postulated raven (not the bird they are describing) is black. Suppose that, after clarification, you discover that they are referring to a grey parrot. You have learned that something is a non-black non-raven, and it is possible that discovering non-black non-ravens confirms 'All ravens are black' relative to your background knowledge, as in the standard PR scenario. However, it might also confirm the prediction that some unknown raven is black, because it might have seemed relatively likely that there was a white raven (disconfirming your belief of a 100% frequency of blackness in the set of raven) and this possibility was closed-off by discovering that the bird was a grey parrot. In the predictive sense, as well as the simpliciter sense, 'That parrot is grey' has confirmed that 'All ravens are black'. Such examples have become standard in the PR literature, and my analysis of predictive confirmation is consistent with their possibility.
Finally, one auxiliary advantage of predictive confirmation is that it provides a type of evidential support that vindicates the intuition that 'All ravens are black' and 'Just 99% of ravens are black' have similar sets of possible confirming evidence-statements. As I said in Sect. 2.2, statistical generalisations are not contrapositable, which is why the PR does not occur for them. Similarly, the conditional predictions ('Given X, expect Y') suggested by universal generalisations are not contrapositable. For predictive confirmation, there is a sense in which both hypotheses are about ravens, but this is due to the formulation of 'All ravens are black' and the pragmatics of this formulation, rather than the semantics of the hypothesis.
At the heart of my resolution is the fact that, while the Equivalence Condition (condition (2) in Sect. 2.1) is a very plausible criterion for any analysis of confirmation simpliciter, it need not be true for every sort of evidential relation. For predictive confirmation, the pragmatics of the confirmed or disconfirmed hypotheses are relevant to their relation towards the evidence, and two statements with the same semantics can differ in their pragmatics. Similarly, there is a sense in which it is strange to say that P confirms 'P but Q' relative to B, when no contrast between P and Q is suggested by either B or (P ^ B), even if P clearly confirms the logically equivalent 'P and Q' relative to B. There is no interpretation of the claims in Sect. 2.1 on which the Equivalence Condition is true and yet it is perplexing that reports of non-black non-ravens would confirm 'All ravens are black.'
A Probabilistic Illustration
My discussion in the preceding section was informal, and some readers might legitimately desire a formal illustration of how predictive confirmation avoids the PR. There are two points I shall make: firstly, that confirmation simpliciter and predictive confirmation can come apart; secondly, that predictive confirmation does not satisfy the Equivalence Condition, and therefore it is possible for reports of non-black non-ravens to predictively confirm 'All ravens are black', but not 'All non-black things are non-ravens'. Thus, in the set of Footnote 17 (continued) be taller than 12 feet tall, but if we discovered the skeletons of a group of prehistoric humans, among whom all the adults were between 11.5 feet tall and 11.9 feet tall, we have apparently confirmed that some human has been over 12 feet tall. This is because the height of our sample of skeletons indicates that there were some unsampled of this group who were over 12 feet tall.
1 3 claims I outlined in Sect. 2.1, they are all true for predictive confirmation except the Equivalence Condition; contrariwise, for confirmation simpliciter, reports of black ravens really do confirm 'All ravens are black' (given the implicit assumptions of the PR, the reports really do increase the probability of the hypothesis) and this only seems strange because we tend to refer to predictive confirmation when making evidential claims about universal generalisations.
I shall consider a very simple example with a very small domain, consisting of two objects a and b, characterised by two logically independent predicates S and G. My example below can be considered in the abstract, but if you would like to imagine circumstances where we would use such a probability model, imagine that a and b are two lottery balls that have just been drawn by a machine from a vat behind a screen. Let S and G be the predicates 'small' and 'green' respectively. Initially, you cannot see either ball, but you will first be shown ball b, and then shown ball a. You know some facts about the machine, which lead you to believe, in broad terms, that the features of b are a very good guide to the features of a when b is small and green. To a lesser extent, the features of b are a good guide when b is small and not green. Otherwise, the features of b are not helpful. Let B be your relevant background knowledge. To simplify, imagine that P(B) = 1. In detail, suppose that your background information results in the following probabilities for the possible circumstances: I shall begin by demonstrating that statement (¬Sb ^ ¬Gb) confirms simpliciter 'All S are G' in this probability distribution. Let H be 'All S are G'. The main intuition behind the calculations in this paragraph is that if (¬Sb ^ ¬Gb ^ B) is true, then there are four equiprobable cases; in three of them, H is true; and this exceeds the probability of H given B alone. Firstly, since P(B) = 1, it follows that P(H | B) = P(H), and this is the probability that everything is ¬S or G. That is equal to the sum of the probabilities in (1), (2), (4), (6), (7), (11), (13), (15), and (16), which is 23 32 = 0.71875. Secondly, the probability of (¬Sb ^ ¬Sb ^ B) is the sum of the probabilities in (11), (12), (13), and (16), which is 4 32 . Finally, (H ^ ¬Sb ^ ¬Gb ^ B) is true in the possibilities in (11), (13), and (16), whose probabilities sum to 3 32 . The conditional probability of H given (¬Sb ^ ¬Gb ^ B) is P(H | ¬Sb ^ ¬Gb ^ B) = Since this probability is greater than P(H | B) = 0.71875, it follows that (¬Sb ^ ¬Gb) confirms simpliciter H relative to B.
Yet (¬Sb ^ ¬Gb) does not predictively confirm H relative to B. The key feature of the probability distribution behind the calculations in this paragraph is that, given B and the assumption of Sa, the prediction of Ga is initially somewhat more likely than not; however, learning (¬Sb ^ ¬Gb) reduces the possibilities to two equiprobable cases, and Ga is only true in one of these, so that Ga is no longer more likely than not. Firstly, the conditional probability of Ga given (B ^ Sa) is the sum of the probabilities in (1), (4), (5), and (11), which is 18 32 = 0.5625. Secondly, P(¬Sb ^ ¬Gb ^ B ^ Sa) is the sum of the probabilities in (11) and (12), which is 2 32 . Finally, the value of P(Ga ^ ¬Sb ^ ¬Gb ^ B ^ Sa) is given in (11), which is 1 32 . Therefore, P(Ga | ¬Sb ^ ¬Ga ^ B ^ Sa) = 2∕32 = 1 2 = 0.5, which is less than P(Ga | B ^ Sa) = 0.5625. Far from confirming the prediction in question, (¬Sb ^ ¬Gb) disconfirms it.
One might worry that this might be an excessively peculiar probability distribution. In particular, one might wonder if this is a probability distribution in which (Sb ^ Gb) does not predictively confirm H, so that it is not a 'normal' inductive probability distribution. One could then worry that, even though what I have said in the previous paragraphs is true, I have not proven that my points could hold when 'All ravens are black' is confirmed by discovering black ravens. This worry is unfounded, because (Sb ^ Gb) does predictively confirm H in this probability distribution. The basic idea is that (Sb ^ Gb) is antecedently expected to be a very good indicator of the features of a, and if this indication is correct, then H is true. Firstly, P(H ^ Sb ^ Gb ^ B) is the sum of the probabilities in (1), (2), and (6), which is 17 32 . Secondly, P(Sb ^ Gb ^ B) is the sum of the probabilities in (1), (2) . In this probability distribution, learning (Sb ^ Gb) provides much stronger confirmation simpliciter for H than learning (¬Sb ^ ¬Gb). It also provides the predictive component of predictive confirmation. As noted in the previous paragraph, P(Ga | B ^ Sa) = 0.5625. The value of P(Sb ^ Gb ^ B ^ Sa) is the sum of the probabilities in (1) and (3), which is 16 32 . Finally, P(Ga ^ Sb ^ Gb ^ B ^ Sa) is given in (1), which is 15 32 . Therefore, P(Ga | Sb ^ Gb ^ B ^ Sa) = 16∕32 = 15 16 = 0.9375, which is greater than P(Ga | B ^ Sa) = 0.5625. Thus, (Sb ^ Gb) confirms Ga relative to B and the assumption that Sa, and thereby satisfies the predictive component of predictive confirmation as well as the confirmation simpliciter component.
To close, I shall use this probability distribution to exemplify one of my key claims: that predictive confirmation does not satisfy the Equivalence Condition. We have already seen that (¬Sb ^ ¬Gb) provides confirmation simpliciter for 'All ¬G are ¬S', since it provides confirmation simpliciter for the logically equivalent 'All G are S' and Bayesian confirmation satisfies the Equivalence Condition. Now, I need to prove that it also provides the predictive component. Firstly, P(¬Sa | B ^ ¬Ga) is the sum of the probabilities in (6), (14), (15), and (16), which is 4 32 = 0.125. Secondly, P(¬Sb ^ ¬Gb ^ B ^ ¬Ga) is the sum of the probabilities in (12) and (16), which is 2 32 . Finally, P(¬Sa ^ ¬Sb ^ ¬Gb ^ B ^ ¬Ga) is the probability in (16), which is 1 32 . Therefore, P(¬Sa | ¬Sb ^ ¬Gb ^ B ^ ¬Ga) = , which is greater than P(¬Sa | B ^ ¬Ga) = 0.125. Thus we can see that, in the probability distribution that I have described, (¬Sb ^ ¬Ga) predictively confirms 'All ¬G are ¬S', even though it does not predictively confirm the logically equivalent hypothesis 'All S are G', because they are pragmatically associated with different predictions. Although confirmation simpliciter satisfies the Equivalence Condition, this probability distribution illustrates how predictive confirmation does not, and this is the principal formal feature of predictive confirmation that I need for my answer to the PR.
Of course, this distribution concerns a very artificial case, though its simplicity and the choice of probabilities makes easy to see the precise probabilities in question. For a more realistic example, consider the Elephant's Foot hypothesis that I discussed earlier: it would be misleading, in normal circumstances, to say (Φ) 'The recent discovery of that galaxy is evidence that all people who sleep overnight on the Elephant's Foot in 2019 live a further 1 3 10 years'. On the Bayesian version of my answer to the PR, the statement Φ is misleading because it suggests to the listener that the astronomical discovery makes the prediction that if someone did sleep overnight on the Elephant's Foot in 2019, they would live a further 10 years, into a more probable prediction. Clearly, this probabilistic relation does not hold for our actual credences. Thus, Φ sounds like it is about predictive confirmation, when actually Φ is at best only true for confirmation simpliciter.
In the ravens case, the same sorts of considerations apply, even though the domain is obviously far larger than in my toy example. Suppose that evidence of non-black nonravens decreases the probability that objects are ravens and that this decrease overpowers the effect of increasing the probability that objects are non-black. (This latter effect could be very small.) It will then confirm 'All ravens are black'. Unlike the case of black ravens, the confirmation comes from providing evidence that ravens are rare, rather than providing evidence that the relative frequency of blackness among ravens is 100%. However, suppose also that it increases the conditional probability, given our background knowledge and the assumption that some object is a raven, that the raven will not be black. This is possible, because the increase in probability that objects are non-black will still be present, but the decrease in the probability of ravens will no longer apply. In other words, the evidence has increased the probability that if the postulated object was a raven, then it would be nonblack. The evidence therefore does not satisfy the predictive component of predictive confirmation, and thus does not predictively confirm 'All ravens are black'. 18 In either toy examples or more realistic cases, Bayesians can disentangle these two types of evidence and accommodate both the commonsense intuitions in the PR and those philosophers who have been led, by their analyses of confirmation simpliciter, to accept what seem to be the opposite of the commonsense intuitions.
Comparison with Alternatives
Firstly, unlike approach (A), my answer requires no clash with confirmation theorists like Hempel and most Bayesians. Of course, if the latter group were to insist that confirmation simpliciter was the only legitimate sense of terms like 'is evidence for', then there would be a clash. However, I know of no reason why confirmation must be a unitary concept in natural language. It would be convenient, but that is no reason to deny predictive confirmation, because natural language is not obliged to be philosophically convenient. Perhaps natural kinds and degrees of naturalness are essential parts of the philosophy of evidence, but if my answer is correct, they are inessential to resolving the PR. Finally, my answer does not require that, in the PR scenario, the evidence fails to probabilify (or otherwise confirm simpliciter) 'All ravens are black'.
Unlike approach (B) in Sect. 2.2, my answer does not entail any mistake by those who believe (4) in Sect. 2.1. They are not ignoring very small degrees of confirmation simpliciter for 'All ravens are black', while also somehow not ignoring small degrees of confirmation simpliciter for the logically equivalent 'All non-black things are non-ravens'. Their intuitions are not about confirmation simpliciter at all, except if they extend them beyond predictive confirmation to where they do not belong. Since people's intuitions are fine, when in their proper place, there is nothing to explain away by reference to degrees of 1 3 confirmation. My answer is also consistent with the intuitions of those who find it puzzling that a black raven could confirm 'All non-black things are non-ravens'. As for large numbers of non-black non-ravens, they will provide no confirmation in the predictive sense for 'All ravens are black' unless they combine with the relevant background knowledge to confirm the prediction that an otherwise unknown raven would be black.
My answer falls under approach (C) in my classification of answers in Sect. 2.3. We agree that there is a sense of 'evidence' where (under typical background assumptions) reports of black ravens provide evidence for 'All ravens are black' but non-black nonravens do not. We also agree that the PR is fundamentally the product of the ambiguity of the notion of 'evidence', and that the paradox is dissolved once we clarify the different senses of this notion. Yet, unlike every version of this approach that I have found, my answer does not appeal to a selective sense of confirmation. This has been the rock upon which, arguably, every existing version of (C) has crashed. Therefore, there is no need to find an explication for the concept of rivalrous hypotheses that enables us to reduce the PR to a mere misunderstanding, because predictive confirmation can do the same job. Nonetheless, I must acknowledge inspiration from selective confirmation theorists like Goodman and Glymour. Additionally, predictive evidence is 'selective' in a different sense of the term, because the same evidence cannot (in normal contexts) predictively confirm both 'All X are Y' and 'All X are ¬Y' because these hypotheses are associated with incompatible predictions: the first hypothesis is pragmatically associated with expecting that a satisfies Y, given the postulate that it satisfies X, whereas the second hypothesis is pragmatically associated with excepting that a satisfies ¬Y. Thus, predictive evidence selects among two contradictory predictions for an otherwise unknown individual a. Among their other merits, earlier versions of (C) were tantalizingly close to my answer.
Objections
One might wonder if my answer depends on idiosyncrasies of Bayesianism. In fact, my answer can be adapted to a variety of theories of evidence, provided that they can handle predictions in a satisfactory way. Hempel's own system struggles in this regard (Hooker 1968) but my answer also works in Henry Kyburg's system of "Evidential Probability" (Kyburg and Teng 2001). Unlike Bayesianism, this is a system of hypothesis acceptance and rejection, in which the fundamental core is a set of purely syntactic rules that govern the inference of hypotheses about relative frequencies. Firstly, according to Kyburg's theory, universal generalisations of the form 'All X are Y' can be supported both by confirming that the frequency of Y among X's is high or that the frequency of ¬X among ¬Y is high. 19 Secondly, singular predictions in Kyburg's theory become more probable by accepting imprecise statistical generalisations given one's evidence. In particular, if E confirms Ya given (Xa ^ B), then E must confirm the hypothesis of a high relative frequency of Y in at least one reference class that we believe to contain a. Kyburg proposes various rules for determining which reference class(es) will be relevant, but for my definition we only assume that a is a member of the reference class of X's. Therefore, it is only the relative 1 3 frequency of Y in X's that is relevant. It is possible that evidence of non-black non-ravens might confirm simpliciter that 'All ravens are black', but only by supporting the statistical generalisation that the relative frequency of non-ravens among non-black things is high, rather than by confirming that the relative frequency of blackness among ravens is high. In Evidential Probability, as in Bayesianism, there can be divergences between confirmation simpliciter and the predictive component of predictive confirmation. My answer is not just an option for Bayesians.
In his discussion of the PR, Hempel considers and criticises an answer that is superficially similar to my own (Hempel 1945, 17-18). On that answer, the hypothesis 'All X are Y' has an implicit range of relevance, which is restricted to those things satisfying the expression X, and only instances in this range will confirm the hypothesis. I agree with Hempel that this is a mistake: it "involves a confusion of logical and practical considerations" (Hempel 1945, 18). The semantics of 'All ravens are black' has nothing particularly to do with ravens. However, that point is compatible with what I have said about predictive confirmation, where practical considerations have an independent and indispensable role. Therefore, it is unsurprising that the arguments that Hempel makes against the range of relevance answer do not apply to my answer. Firstly, he notes that scientists never make this range of relevance explicit, but on my answer the scope of the predictions associated with a universal generalisation (not the hypothesis itself) is suggested precisely by the choice of how one formulates the hypothesis: 'All X are Y' versus 'All ¬Y are ¬X'. Secondly, Hempel points out that there are commonplace logical operations (for instance, contraposition) which require that hypotheses of the forms 'All X are Y' and 'All ¬Y are ¬X' have the same truth-conditions, but the range of relevance answer trades on distinct semantics for such hypotheses. In contrast, I have not denied that universal generalisations are contrapositable, but instead claimed that (in some circumstances) evidence for the reliability of one contrapositive's associated predictions might not be evidence for the reliability of the other contrapositive's associated predictions. Since this association is pragmatic, rather than semantic, it does not require a difference of truth-conditions. My answer allows that 'All ¬Y are ¬X' can be confirmed by evidence that instances of ¬Y are ¬X. Thus, 'All non-black things are non-ravens' can be confirmed, in the predictive sense, by a report of a non-black non-raven. Kyburg objects to confirmation theories with this feature, because scientists do not test hypotheses like 'All non-black things are non-ravens' by investigating the proportion of non-black things among non-ravens (Kyburg 1968, 309). Certainly, ornithologists do not test such hypotheses-but nor do they go around looking for black ravens to test 'All ravens are black'. Whether a rational scientist is interested in testing a hypothesis depends on a wide variety of factors, including the cost of testing, the probability of the hypothesis given the background evidence, its anticipated explanatory benefits, its expected technological utility, and so on. Therefore, it is possible that a particular hypothesis is testable, even though it would be silly to expend resources on testing it. 'All non-black things are non-ravens' could be such a hypothesis.
A different line of criticism could be made against the usefulness of predictive evidence. Why do I need to keep track of the reliability of making the predictions associated with a universal generalisation, given that the universal generalisation is wellconfirmed? If I strongly believe that 'All ravens are black', then of course I also do not strongly believe that there are any non-black ravens. Surely universal generalisations can do all the necessary work; all my talk of 'the reliability of making the predictions associated with a universal generalisation' is redundant. I have two principal responses to this criticism: firstly, keeping track of the reliability of different predictive policies has a useful function of epistemic hygiene. In cases where 'All X are Y' is supported by my evidence because I have good evidence that nothing satisfies X, I might forget that the reason I believe this hypothesis is not a predictively useful observed or hypothetical connection between X and Y given my total evidence, but simply because I had reasons to think that the hypothesis is vacuously satisfied. Keeping track of whether hypotheses are predictively confirmed, rather than merely confirmed, can help avoid such confusions. Douven (2008, 24) makes a similar point regarding the role of epistemic hygiene for the acceptability of conditionals. Secondly, recognising and retaining the reliability of different predictive policies helps prepare us for inferences after the loss of the universal generalisation: 'All mammals do not lay eggs' is no longer consistent with our evidence, but it is still a good rule-of-thumb, whereas 'All Presidents of the United States of America are men' will not be a good rule-of-thumb after there is a counterexample. For ideally rational agents, such advanced preparations for forgetfulness and rules of thumb are perhaps not important, but for flesh-and-blood humans, they are an inescapable part of our everyday reasoning.
My answer to the PR implies that confirmation is not a unitary concept, which might seem objectionable on grounds of complexity. However, there is precedent for taking confirmation to be ambiguous between multiple notions. For instance, Carnap (1962, xvi) distinguished between a variety of different sorts of confirmation, including both (1) whether a statement E increased the "firmness" of a hypothesis H given the relevant background information B and (2) whether H was "firm" on E and B. Another precedent of a non-unitary analysis of confirmation is that Joyce (2004, 144-145) uses a non-unitary analysis of confirmation to develop an intriguing answer to the Problem of Old Evidence for Bayesian epistemology. Simplicity can be sacrificed when there is a sufficient explicatory pay-off.
Finally, my answer to the PR is fundamentally empirical: the paradox is a product of an ambiguity in natural language. Yet I have only supported my answer through stylised facts associated with the PR and similar qualitative peculiarities concerning the analysis of universal generalisations. Therefore, one might reasonably worry that my answer is ad hoc. I have no novel evidence for my claims, but I can propose some experimental predictions. To begin, one would start by checking if each individual subject accepts the Scientific Laws Condition. Secondly, one would present the ravens hypothesis in a form that lacks the pragmatics that I have suggested are associated with universal generalisations in natural language, such as 'Everything is a non-raven or black or both', and check if the subjects understand the truth conditions of this sentence. (Given the doubtfully empirical status of 'All ravens are black', it might be preferable to use a hypothesis like 'All panther mushrooms are poisonous' and 'Everything is a non-panther mushroom or poisonous or both'.) Finally, one could test to see if the PR survives the transformation: do people still find it counterintuitive that a non-black non-raven could be evidence for the hypothesis? My answer predicts that people would become comfortable with this possibility. A further prediction is that people who are troubled by a non-black non-raven confirming 'All ravens are black' will nonetheless generally be comfortable with the notion of such evidence confirming 'All non-black things are non-ravens', even though these generalisations are logically equivalent and have the same degrees of confirmation given the evidence. If I am correct, then a non-black non-raven does confirm 'All non-black things are non-ravens' relative to the implicit background information and confirms the reliability of making the predictions associated with it, and therefore I would expect that people generally do not find this paradoxical. I have no expertise in psychological testing, but it does seem that my explanation is testable and has some novel predictions. Still, I accept that it is sufficiently ad hoc to warrant significant scepticism, at least until we have tested its predictions beyond mere stylized facts and appeals to intuitions.
Conclusion
Our hesitance to say that reports of white shoes confirm that 'All ravens are black' is the product of an ambiguity. Once we disambiguate 'confirms' between confirmation simpliciter and predictive confirmation, we can happily say that (given certain background assumptions) we have confirmation simpliciter but not predictive confirmation for this hypothesis. Ordinary language often conflates these two types of evidence, yet formal explications of evidence are free to provide greater precision that can remove such paradoxes of ambiguity.
Critics of induction like Feyerabend (1968) and Popper (1974, 991) have used the PR to ridicule the notion of inductive reasoning. My answer implies that the paradox reveals no problems with induction at all. Abstracting from our ordinary inductive concepts can lead us astray if we fail to recognise what we are doing. This need for caution neither implies a problem for induction, nor a deep problem for abstract approaches to confirmation theory. I think that formally-orientated confirmation theory is perhaps the most successful research programme in all of philosophy, but we leave ourselves open to spurious paradoxes if we misunderstand the focus of this research. Pragmatics and formal analyses of confirmation theory can profitably travel together. | 14,816.2 | 2019-12-27T00:00:00.000 | [
"Philosophy"
] |
A FOURTH ORDER IMPLICIT SYMMETRIC AND SYMPLECTIC EXPONENTIALLY FITTED RUNGE-KUTTA-NYSTR¨OM METHOD FOR SOLVING OSCILLATORY PROBLEMS
. In this paper, we derive an implicit symmetric, symplectic and exponentially fitted Runge-Kutta-Nystr¨om (ISSEFRKN) method. The new integrator ISSEFRKN2 is of fourth order and integrates exactly differential systems whose solutions can be expressed as linear combinations of functions from the set { exp( λt ) , exp( − λt ) | λ ∈ C } , or equivalently { sin( ωt ) , cos( ωt ) | λ = iω, ω ∈ R } . We analysis the periodicity stability of the derived method ISSEFRKN2. Some the existing implicit RKN methods in the literature are used to compare with ISSEFRKN2 for several oscillatory problems. Numerical results show that the method ISSEFRKN2 possess a more accuracy among them.
1. Introduction. In this paper we focus on the initial value problems (IVP) related to a system of second-order ODEs of the form y = f (x, y), y(x 0 ) = y 0 , y (x 0 ) = y 0 , x ∈ [0, x end ], (1) whose solutions exhibit an oscillatory character. Problems of this type are of great interest in applied sciences such as molecular dynamics, orbital mechanics, and electronics. High accuracy of integration is often required in these areas. Until now there are broadly two categories of approaches to numerical integration of the IVP (1): indirect and direct. On one hand, if a new variable u is introduced to represent the first derivative y , then IVP (1) is turned into the partitioned system of first order equations y = u, u = f (x, y), y(x 0 ) = y 0 , u(x 0 ) = y 0 , and the problem can be solved by the general Runge-Kutta (RK) methods or partitioned Runge-Kutta (PRK) methods (see Refs. [16,4,17,5,22,23,6]). On the a ij f (t 0 + c j h, Y j ), i = 1, · · · , s, which can be expressed in the Butcher tableau as c s 1 1 a s1 · · · a ss 1 1b 1 · · ·b s 1 b 1 · · · b s The objective of this section is to specify when the RKN method (3) is symmetric, symplectic and exponentially fitted. This is the cornerstone of our paper. In the following subsections, we will put forward these three important properties step by step.
2.1. Symmetry conditions. The key to understanding symmetry is the concept of the adjoint method. We denote a one-step method for second-order ODEs (1) as Φ h : (y 0 , y 0 ) T → (y 1 , y 1 ) T . Here, from y 0 to y 1 , the variable goes forward with a step h. Then the symmetry of Φ h is defined as follows.
Definition 2.1. The adjoint method Φ * h of a one-step method Φ h is the inverse map of the original method with reversed time step −h, i.e., Φ * h := Φ −1 −h . In other words, In the case of the s-stage RKN method (3), a set of sufficient conditions for the symmetric method are given by In this paper we consider the method (3) whose coefficients are z-dependent, as we do for exponentially fitted type methods (see Refs. [22,23,25]). Then the method (3) is symmetric if its coefficients satisfy the following conditions where z = iωh, ω is the principal frequency of the problem. We assume that the coefficients of the method (3) are even functions of h, as we frequently encounter in the case of EFRKN methods, so that these conditions reduce to the conditions (4).
2.2.
Symplectic conditions. Now, we turn to the symplectic conditions for the scheme (3). Symplecticity is defined for a Hamiltonian system. On many occasions, the problem under consideration takes the form of a Hamiltonian systeṁ where S is a symmetric positive definite constant matrix. This system is equivalent to the second-order equation (1) with f (x, q) = −S −1 ∂ ∂q U (x, q). The following definition can be found in [8]. Here we only list it without explanation.
Definition 2.2. A one-step method is symplectic if for every smooth Hamiltonian function H and for every step size h, the corresponding flow preserves the differential 2-form Accordingly, the scheme (3) for the problem (1) is symplectic if and only if For the left side of this equation, we have Eliminating dy 0 in the second term of this equation by inserting Eq.(3) we obtain Therefore, Eq. (5) holds if the following conditions are satisfied 2.3. Exponential fitting conditions. Following Albrecht's approach (see Refs. [1,2]), each stage of the scheme (3) can be viewed as a linear multistep method on a non-equidistant grid. With each stage one can associate a linear functions as follows: • for the internal stages, a ij y (x + c j h), i = 1, 2, · · · , s; • for the final stages, By requiring the internal and final stages vanish for the functions from the set {exp(±iωx)} leads to the following equations Note that cosh(z) = (e z + e −z )/2 and sinh(z) = (e z − e −z )/2, then the equations (7) imply that In this paper, we call the method (3) satisfied the exponentially fitted (EF) conditions (8) and (9) as exponentially fitted RKN (EFRKN) method.
3. Algebraic order conditions. In this section, we will present algebraic order conditions for exponentially fitted Runge-Kutta-Nystöm (EFRKN) methods. For an EFRKN method, the local truncation errors in the approximations of its solution and its derivative can be expressed as where F (j) (y 0 ) denotes an elementary differential and the terms d or equivalently, Since our RKN method is a particular case of the RKN method considered in [7], following the approach in [7] we consider the following assumptions The order conditions up to fifth order for the RKN method (3) are the following ones: Order 1 requires: Order 2 requires in addition: Order 3 requires in addition: Order 4 requires in addition: Order 5 requires in addition: From Theorem 2.1 in [7], we know that the EFRKN method (3) has algebraic order at least 2.
4. Construction of implicit symmetric symplectic EFRKN method. In this section we construct implicit EFRKN method under the symmetry, symplecticity and exponential fitting conditions obtained in the previous section.
WENJUAN ZHAI AND BINGZHEN CHEN
Until now, we obtain an implicit symmetric and symplectic exponentially fitted Runge-Kutta-Nyström method which coefficients are given by We denote this method as ISSEFRKN2. In order to specify the algebra order of ISSEFRKN2, we give the Taylor expansions of the coefficients.
From the taylor expansions, we can verify that our method ISSEFRKN2 satisfies algebraic conditions up to fourth order, but doesn't satisfy the fifth order condition d i c 4 i − 1 5 = 0. So, the method ISSEFRKN2 is of order 4. The method ISSEFRKN2 is exponentially fitted. So when the solution of (1) can be expressed as linear combinations of functions from the set {exp(±iωx)}, ISSEFRKN2 has higher efficiency and competence than other integrators which are not exponentially fitted. This will be shown in the numerical studies. 5. Periodicity region of the new method. Now we start to analyze the stability property of our new method. Stability means that the numerical solutions remain bounded as we move further away from the starting point. For classical RKN methods, the stability properties are checked using the second order linear test model Recall that the new symmetric and symplectic exponentially fitted implicit RKN method derived in the previous section is dependent on the complex number λ = iω, where ω > 0 is an estimate of the dominant frequency. Applying an s-stage ISSEFRKN method (3) to the test model (19) yields where The stability behavior of the numerical solution depends on the eigenvalues or the spectrum of the stability matrix M = M (H 2 , ν 2 ). Eliminating y 0 and y 1 from (20) and the equation that is obtained from (20) by replacing the subscript 0 by 1 gives the difference equation Accordingly, the characteristic equation is given by (iii) If R s = (0, ∞) × (0, ∞) except possibly for a discrete set of curves, the method is A-stable; (iv) If R p = (0, ∞) × (0, ∞) except possibly for a discrete set of curves, the method is P-stable.
The periodicity region of the method ISSEFRKN2 is depicted in Figure 1. . This method is not exponentially fitted. • ISSEFRKN2: The symmetric and symplectic exponentially fitted two-stage fourth-order RKN method (18) proposed in this paper.
Compared with our method ISSEFRKN2, the method DIRKNRaed or DIRKNNora is neither symmetric, symplectic nor EF, ISSRKN2 is not EF. In our numerical experiments we have solved the non-linear equations with the Newton iteration method and taking initial values Y . The iteration is carried out until the difference between the Euclidean norm of two successive iterations attains 10 −8 . The maximum number of iterations is 1000.
The criterion used in the numerical comparisons is the usual test based on computing the maximum global error in the solution over the whole integration interval. In Figures 2-6 we show the decimal logarithm of the maximum global error (log10(err)) versus the number of steps required by each code on a logarithmic scale (log10(nsteps)). All computations are carried out in double precision arithmetic (16 significant digits of accuracy). Problem 1. We consider the linear problem with variable coefficients y + 4x 2 y = (4x 2 − ω 2 ) sin(ωx) − 2 sin(x 2 ), x ∈ [0, x end ] y(0) = 1, y (0) = ω, whose analytic solution is given by y(x) = sin(ωx) + cos(x 2 ). This solution represents a periodic motion that involves a constant frequency and a variable frequency. In our test we choose the parameter values ω = 10, λ = 10i, x end = 10, h = 1/2 m , m = 4, 5, 6, 7, and the numerical results are stated in Figure 2.
As we can see, its solution is independent on µ.
In this problem, the parameters are chosen as µ = 0.25, λ = √ µi = 0.5i, x end = 10, and the numerical results presented in Figure 6 have been computed with the integration steps h = 1/2 m , m = 1, 2, 3, 4. From Figures 2-6, we can find that symmetric and symplectic method ISSRKN2 is more efficient than the nonsymmetric or nonsymplectic methods. But, all of them are not better than the exponentially fitted RKN method ISSEFRKN2 when the exact solutions can be expressed in term of triangular functions. 7. Conclusions. In this paper a two-stage IEFRKN integrator which is symmetric and symplectic have been derived. Like the existing EFRKN integrators (see [25] for example), the coefficients of the new method depend on the product of the dominant frequency ω and the step size h. When the parameter z(= ωh) approaches to zero, the ISSEFRKN method reduces to the classical RKN method. The numerical experiments carried out show that the new method is more efficient than the twostage classical symmetric and symplectic RKN integrator and other RKN methods used in the numerical studies. | 2,650.2 | 2019-01-01T00:00:00.000 | [
"Mathematics"
] |
Pyk2 Amplifies Epidermal Growth Factor and c-Src-induced Stat3 Activation*
Signal transducers and activators of transcription factors (STATs) mediate many of the cellular responses that occur following cytokine, growth factor, and hor-mone signaling. STATs are activated by tyrosine and serine phosphorylation, which normally occurs as a tightly regulated process. Dysregulated STAT activity may facilitate oncogenesis, as constitutively activated STATs have been found in many human tumors as well as in v-abl- and v-src-transformed cell lines. Pyk2 is a member of the focal adhesion kinase family and can be activated by c-Src, epidermal growth factor receptor (EGFR), Janus kinase 1, tyrosine kinases, and G-protein-coupled receptor signaling. Although Pyk2 has been implicated in Janus kinase-dependent activation of MAPK and Stat1, no role for Pyk2 in the activation of other STAT proteins has been ascribed. Here, we provide evidence that Pyk2, along with c-Src, facilitates EGFR-me-diated Stat3 activation. Pyk2 expression in HeLa cells induces Stat3 reporter gene activation and Stat3 phosphorylation on amino acid residues Tyr-705 and Ser-727. Together Pyk2 and c-Src potently activate Stat3, and Pyk2 enhances Stat3-induced cell proliferation. More-over, the expression of a dominant negative version of Pyk2 impairs c-Src-induced Stat3 activation and cell proliferation. The treatment of A431 cells with EGF results in the recruitment of c-Src, Pyk2, and Stat3 to the EGFR and the phosphorylation of c-Src, Pyk2, and Stat3. Expression of constructs for dominant negative forms of either Pyk2 or c-Src impair EGF-induced Stat3 phosphorylation. These results indicate that Pyk2 facilitates EGFR- and c-Src-mediated Stat3 activation, thereby implicating Pyk2 activation as a potential co-mediator in triggering Stat3-induced oncogenesis.
Stat3 was first described as an IL-6-inducible 1 DNA binding activity reactive with the acute phase response element (1)(2)(3). Molecular characterization led to its identification as a STAT protein and the demonstration that not only IL-6 but also other cytokines (which use gp130 as a signal transducer) potently induced its activity (4). Subsequent studies revealed that other agents, including growth factors, interferons, and oncoproteins, also activate Stat3 (5)(6)(7). Ablation of the Stat3 locus in mice led to an early embryonic lethality complicating the assignment of its precise biologic role (8). The analysis of mice in which Stat3 has been disrupted in various adult tissues has led to a recognition that Stat3 participates in a diverse set of cellular responses. These include the migration of keratinocytes (9), the survival of thymic epithelial cells (10), IL-2R␣ expression on T lymphocytes (11), apoptosis in the mammary gland epithelium (12), modulation of inflammation (13), the induction of the acute phase response in the liver (14), and the survival of sensory and motor neurons (15,16). Despite the lack of a clear molecular understanding of the roles of Stat3 in embryonic and even adult tissues, Stat3 has emerged as a critical mediator in the pathogenesis of a variety of human cancers.
Evidence for the role of Stat3 in human cancer includes the following observations (reviewed in Refs. [17][18][19]. First, constitutively active forms of Stat3 can induce partial cellular transformation. Second, Stat3 is activated by oncogenic tyrosine kinases including v-Src and bcr-abl. Third, dominant negative forms of Stat3 can block cellular transformation induced by these oncogenic tyrosine kinases. Fourth, Stat3 activation leads to the activation of target genes involved in cell proliferation and survival implicating its essential pathways involved in oncogenesis. Fifth, activated Stat3 has been found in human malignancies. v-Src is a potent oncoprotein, and the activation of Stat3 is critical for its transforming ability (6). In addition, c-Src links IL-3 receptor (21), platelet-derived growth factor receptor (22), epidermal growth factor receptor (EGFR) (3), and angiotensin II AT1 receptor (23) signaling to Stat3 activation. However, the mechanism by which c-Src activation leads to Stat3 activation remains unclear. c-Src family SH3 domains have been reported to directly interact with Stat3, leading to Stat3 tyrosine phosphorylation (24). Another study implicated Etk, a Tec family tyrosine kinase, as an intermediary in v-Src-induced Stat3 activation and transformation (25). Etk is expressed in a variety of tissues including hematopoietic, epithelial, and endothelial cells. Besides linking v-Src to Stat3 activation, Etk participates in IL-6-induced differentiation of prostate cancer cells (26), functions as an intermediary in G␣12/13-induced activation of serum response factor (27), and mediates cell motility in signaling pathways that become activated upon integrin-triggered cell adhesion (28). We have reported previously that the proline-rich tyrosine kinase Pyk2 processes similar upstream information and coordinates the activation of similar downstream signaling pathways, as do the Tec kinases (29). Furthermore, both Tec kinases and Pyk2 participate in cell migration (28,30,31).
Pyk2 and focal adhesion kinase (Fak) are members of a distinct family of nonreceptor protein tyrosine kinases that are * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Based on the overlapping functional roles of the Tec family kinases and Pyk2 and because of the established role of Pyk2 in Stat1 activation and the known role of c-Src in Pyk2 activation, we investigated whether Pyk2 participated in c-Src-and EGFmediated Stat3 activation. We report that Pyk2 facilitates c-Src-mediated Stat3 activation and participates in EGF receptor signaling to Stat3 activation.
Transfections and Reporter Gene Assays-HeLa and A431 cells were obtained from the American Type Culture Collection (Manassas, VA). The cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and transfected using Super-Fect (Qiagen Inc., Valencia, CA) in 6-well plates following the manufacturer's protocol. The collected cells were lysed in 200 l of reporter lysis buffer (Promega, Madison, WI) for 30 min on ice. After centrifugation, 20 l of the supernatant was tested for -galactosidase activity, using galactan chemiluminescent substrate (Tropix, Bedford, MA), or luciferase activity, using a luciferase substrate (Promega). Data from all the transfection assays were normalized using the activity of a control reporter gene.
Immunoblotting and Immunoprecipitations-HeLa cell lysates were prepared using reporter lysis buffer (Promega) for 30 min on ice. The detergent-insoluble material was removed by centrifugation for 10 min at 14,000 rpm and 4°C. Equal amounts of protein from each sample were fractionated by SDS-PAGE and transferred to pure nitrocellulose. Membranes were blocked with 5% milk in TTBS (Tween 20, Tris base, salt) for 1 h and then incubated with an appropriate dilution of the primary antibody in 5% milk and 0.05% sodium azide in TTBS overnight. The blots were washed three times with TTBS before the addition of the biotinylated second antibody (DAKO, Carpinteria, CA) diluted 1:5000 in TTBS containing 3% bovine serum albumin. Following a 1-h incubation, the blot was washed three times with TTBS and then incubated with streptavidin conjugated to horseradish peroxidase (DAKO) diluted 1:10,000 in TTBS containing 3% bovine serum albumin. The signal was detected by enhanced chemiluminescence (ECL) following the recommendations of the manufacturer (Amersham Biosciences). The co-immunoprecipitations were performed using lysates (20 mM Tris, pH 8.0, 137 mM NaCl, 2 mM EDTA, 1% Triton X-100, 1 mM sodium orthovanadate plus protease inhibitors) prepared from HeLa or A431 cells. Specific antibodies were used to immunoprecipitate Pyk2, Src, or EGF receptor on the appropriate second antibody-coupled magnetic beads (Dynal Corp., Lake Success, NY) from the cell lysates. None of the antibodies cross-reacted with any of the proteins used in this study. The immunoprecipitates were washed six times with the lysis buffer and twice with 500 mM NaCl salt in the same lysis buffer. Subsequently, they were fractionated by SDS-PAGE and analyzed by immunoblotting with the appropriate antibody.
Cell Proliferation Assay-HeLa cells (1 ϫ 10 4 /well) were seeded in 24-well plates. After incubating overnight, the cells were transfected as described above with different plasmids as indicated. Before labeling the cells, the cells were cultured in 0.5% serum-containing medium for 24 h. Then the medium was changed with 10% (v/v) Cell Counting Kit-8 labeling solution (Dojindo Molecular Technologies, Inc., Gaithersburg, MD), and the cells were incubated for about 15 min. The medium from the labeled cells was transferred to a 96-well plate, and the absorbance at 450 nm was measured on a plate reader. For the thymidine incorporation assays, HeLa cells (1 ϫ 10 3 /well) were seeded in a 96-well plate and cultured for 5 h prior to DNA transfection (40 ng/well) using LipofectAMINE 2000 (Invitrogen). 24 h later, the cells were pulsed for 4 h with 0.2 Ci of [ 3 H]thymidine, after which the cells were harvested. The amount of [ 3 H]thymidine incorporated was measured using a -counter following the addition of scintillation fluid.
Stat3
Is Activated by Pyk2 Expression-We used two measures of Stat3 activation: the response of a Stat3-sensitive reporter gene and the status of Stat3 phosphorylation using phosphopeptide-specific antibodies. Phosphorylation of Tyr-705 of Stat3 is required for Stat3 dimerization, nuclear translocation, and DNA binding activity (37). Phosphorylation of Ser-727 of Stat3 enhances its transcriptional activity (38). We first determined whether Pyk2 overexpression resulted in enhanced transcription of a Stat3 reporter gene by transfecting HeLa cells with a luciferase reporter gene that contained four copies of the Stat3 binding site fused to a minimal promoter along with different amounts of the Pyk2 expression construct. Pyk2 introduced into cells by transient transfection has constitutive activity, which can be boosted by upstream activating signals (35). In our experiments, expression of Pyk2 resulted in a modest increase in Stat3 reporter gene activity (Fig. 1A). To detect Stat3 phosphorylation, we transfected expression constructs for Pyk2 and Stat3 into HeLa cells and checked the status of Stat3 phosphorylation using phospho-Stat3 antibodies. Pyk2 dramatically induced the phosphorylation of Stat3 on Tyr-705 and Ser-727, whereas a kinase inactive form of Pyk2 (Pyk2 KD) did not induce Stat3 phosphorylation on the same residues (Fig. 1B). Similar to the Pyk2-induced Stat3 reporter gene activity, enhanced expression of Pyk2 resulted in a higher amount of Stat3 phosphorylation on both residues (Fig. 1C). These results indicate that Pyk2 overexpression can lead to Stat3 activation.
Pyk2 Facilitates Src-induced Stat3 Activation-As a first test of our hypothesis that Pyk2 participates in the activation of Stat3 by c-Src, we co-transfected DNA constructs that express Stat3, activated Src, wild type, and the kinase-inactivated form of Pyk2 and measured Stat3 reporter gene activity and the status of Stat3 Tyr-705 and Ser-727 phosphorylation. In these experiments, we used HeLa cells, which express a low level of endogenous Pyk2. The results show that Pyk2 and activated c-Src individually and additively activate the Stat3 reporter gene ( Fig. 2A). Expression of Pyk2 tended to result in the preferential phosphorylation of Stat3 on Ser-727, whereas activated c-Src resulted in preferential phosphorylation of Stat3 on Tyr-705. Together they induced a very strong phosphoryla-tion of both residues (Fig. 2B). We also examined whether a dominant negative form of Pyk2 would impair the ability of activated c-Src to trigger the phosphorylation of Stat3. We found that co-expression of the dominant interfering form of Pyk2 significantly impairs activated c-Src-induced Stat3 reporter gene activation and the phosphorylation of Stat3 on Tyr-705 and Ser-727 (Fig. 2, A and C). These results indicate that Pyk2 may be a downstream tyrosine kinase involved in c-Src-mediated Stat3 activation.
Pyk2 Associates with Stat3-Because Pyk2 and c-Src individually and together potently activate the Stat3 reporter gene and induce Stat3 phosphorylation and because Pyk2 has been reported to interact with c-Src (34) and c-Src to interact with Stat3 (6), we examined whether we could detect a Pyk2-c-Src complex and a Pyk2-Stat3 complex. Based on the immunoblotting results, each of the proteins expressed well in HeLa cells. Using a lysate from the transfected cells, we immunoprecipitated with anti-Pyk2 or anti-Src antibodies or a hemagglutinin antibody as the negative control and examined for the presence of co-precipitated proteins by Western blotting. We detected Stat3 and activated c-Src in the Pyk2 immunoprecipitates and Stat3 and Pyk2 in the Src immunoprecipitates. Pyk2, Stat3, and activated c-Src could not be detected in the hemagglutinin antibody immunoprecipitate (Fig. 3). Nearly equivalent levels of activated c-Src and Pyk2 immunoprecipitated with the Pyk2 antibody, whereas the Src immunoprecipitates contained significantly less Pyk2. The amounts of Stat3 in the two immunoprecipitations were similar. Because the Stat3 antibody did not efficiently immunoprecipitate Stat3, we could not examine Stat3 immunoprecipitates for the presence of Pyk2 and Src (data not shown).
c-Src Expression Augments Pyk2 Kinase Activity and Results in Its Phosphorylation on Multiple Tyrosines-Next, we examined the effect of c-Src on Pyk2 kinase activity by using an in vitro kinase assay. HeLa cells were transfected with constructs that express Pyk2 or Pyk2 KD in the presence or absence of c-Src, active c-Src, a dominant negative form (c-Src DN), or a kinase-dead form (c-Src KD). We subjected immunoprecipitated Pyk2 to an in vitro kinase assay using poly(Glu,Tyr) (4:1) as a substrate. Both wild type and activated c-Src strongly enhanced Pyk2 kinase activity but had no effect on Pyk2 KD. c-Src DN and c-Src KD slightly enhanced the activity of Pyk2 when compared with its basal activity (Fig. 4A).
We also compared the effects of c-Src, active c-Src, c-Src DN, and c-Src KD on Pyk2 tyrosine phosphorylation using antibodies specific for various PY peptides from Pyk2. These antibodies recognize Pyk2 PY402, an autophosphorylation site and a Srcfamily SH2 domain binding site, which is required for Pyk2 kinase activation; PY579, present in the activation loop of the kinase domain; PY580, also located in the activation loop of the kinase domain; and PY881, a Grb2 SH2 binding site. We used the HeLa cell lysates from the same assay as shown with the Pyk2 kinase assay. Expression of either c-Src or activated c-Src resulted in the phosphorylation of Pyk2 and Pyk2 KD on Tyr-402, Tyr-579, Tyr-580, and Tyr-881, irrespective of the catalytic activity of Pyk2 (Fig. 4, A and B). Therefore, the phosphorylation on Pyk2 by c-Src (whether direct or indirect) does not require Pyk2 kinase activity. Interestingly, the expression of c-Src KD and c-Src DN also led to an increase in Pyk2 phosphorylation on Tyr-402 but failed to have such an effect on Pyk2 KD. We also noted that a small increase in Pyk2 kinase activity accompanied the increased PY402 levels triggered by the c-Src mutant proteins (Fig. 4, A and B). These results suggest that c-Src can enhance Pyk2 activation by two different mechanisms. First, c-Src either directly or indirectly induces the phosphorylation of Tyr-402, Tyr-579, Tyr-580, and Tyr-881 thereby enhancing Pyk2 kinase activity. Second, the interaction of c-Src with Pyk2 apparently facilitates Pyk2 autophosphorylation and autoactivation.
Next, we verified that the wild type and KD form of Pyk2 similarly interacted with the various c-Src proteins. We did not find a significant difference between the ability of wild type and Pyk2 KD to co-immunoprecipitate with the various c-Src proteins; however we did note some differences in the ability of the c-Src proteins to interact with Pyk2. Activated c-Src preferen-tially associated with Pyk2 when compared with the others (Fig. 4C). This result suggests that conformational change associated with c-Src activation may facilitate its interaction with Pyk2. (39 -41). We used HeLa cell growth as a readout of Stat3 activation. We transfected HeLa cells with various combinations of constructs that express Pyk2, activated c-Src, and Stat3 in the presence or absence of various dominant negative versions of Pyk2 or Stat3 and monitored cell growth 24 -30 h later using a colorimetric assay. In this assay, cellular dehydrogenases produce a colored formazan product, which is directly proportional to the number of living cells. We found that expression of Pyk2-enhanced cell growth was 35% above the basal, whereas activated c-Src nearly doubled it. Although Stat3 alone only raised cell growth ϳ20%, the addition of Pyk2 enhanced cell growth 3-fold. The expression of a dominant negative form of Stat3-impaired Pyk2 induced cell growth. The expression of the kinase-dead form of Pyk2 nearly attenuated the c-Src-enhanced cell growth to the basal level (Fig. 5A). Together Pyk2 and Stat3 synergistically induced HeLa cell growth, whereas interfering with endogenous Stat3 activity impaired Pyk2-induced cell growth. Furthermore, the Pyk2 kinase-dead form inhibited activated c-Src-induced cell growth. These results are consistent with a role for Pyk2 in c-Srcinduced Stat3 activation and implicate Pyk2 in Src-mediated cell transformation.
Pyk2 Enhances Stat3 and c-Src-induced Cell Proliferation-One of the biologic readouts of Stat3 activation is enhanced cell proliferation. Stat3 target genes involved in cell survival and proliferation include Bcl-x, Mcl-1, Bcl-2, Myc, and cyclin D1
To complement the colorimetric assay, we also used a traditional thymidine incorporation assay. Although the synergy between Pyk2 and Stat3 was not as evident in this assay, we found that overexpression of Pyk2, activated c-Src, or Stat3 in HeLa cells enhanced the incorporation of [ 3 H]thymidine compared with control cells. The combination of Pyk2 and activated c-Src resulted in the highest level of [ 3 H]thymidine incorporation, whereas the levels observed following the expression of Pyk2 and Stat3 exceeded those observed with either construct alone (Fig. 5B). Because the transfection efficiency of HeLa cells is ϳ60% (i.e. 40% of the cells in the assay do not express the transfected constructs), these results underrepresent the consequences of overexpressing these proteins.
c-Src Dominant Negative or Pyk2 KD Impairs EGF-induced Stat3 Phosphorylation-The mechanism by which receptor tyrosine kinases activate Stat3 has been controversial, although the evidence supports a functional role for c-Src activation. Based on our previous experiments showing that Pyk2 enhances c-Src-mediated Stat3 activation, we tested the effects of expressing the Pyk2 KD on EGF-induced Stat3 activation. We first verified that EGF induced the Stat3 reporter construct in HeLa cells and assessed the role of Pyk2 in EGF-induced Stat3 activation in these cells using either the wild type or the KD form of Pyk2 (Fig. 6A). We found that both Pyk2 expression and EGF treatment resulted in a similar low level activation of Stat3-dependent transcription. The addition of Pyk2 significantly enhanced EGF-induced Stat3-dependent transcription, whereas expression of Pyk2 KD impaired EGF-induced Stat3 activation. Next we switched to A431 cells, which express high endogenous levels of Pyk2, Stat3, and c-Src and very high levels of EGFR. Treatment of A431 cells with EGF resulted in the rapid phosphorylation of Stat3 on Tyr-705 and Ser-727, the phosphorylation of Pyk2 on Tyr-402, and Erk activation as assessed by Western blotting of a phosphospecific antibody (Fig. 6B). The expression of either PYK2 KD or c-Src DN resulted in a dramatic decrease in EGF-induced Stat3 phos- phorylation. c-Src DN had a modest effect on EGF-induced Erk activation, whereas Pyk2 had only a minor effect. The expression of the Pyk2 KD mildly impaired the level of Pyk2 Tyr-402 phosphorylation, whereas c-Src DN had a much more significant effect. These data argue that EGF-mediated c-Src activation facilitates Pyk2 activation and implicate both c-Src and Pyk2 in EGF-induced Stat3 activation in A431 cells. The effect of Pyk2 KD on Pyk2 Tyr-402 phosphorylation may suggest some role for Pyk2 in amplifying EGF-mediated c-Src activation.
EGF Stimulation Results in the Recruitment of c-Src, Pyk2, and Stat3 to the EGFR-Because we had relied on transfected cells to demonstrate associations between Stat3 and Pyk2 and Src and Pyk2, we attempted to find associations between the endogenous proteins following EGF stimulation. Because both c-Src and Stat3 have been associated with the EGFR, we analyzed both the EGFR immunoprecipitates as well as the Pyk2 immunoprecipitates following stimulation of the A431 cells with different concentrations of EGF. In the absence of EGF treatment, we failed to detect significant levels of Pyk2, Stat3, or c-Src in association with the EGFR. However, following EGF treatment, we detected Pyk2, Stat3, and c-Scr in the EGFR immunoprecipitation (Fig. 7A, left panel). In addition, when we examined the Pyk2 immunoprecipitations, we found (following EGF treatment) that we co-immunoprecipitated EGFR, Stat3, and c-Src (Fig. 7A, right panel). We also checked the phosphorylation status of Pyk2, c-Src, Stat3, and Erk following exposure of A431 cells to different concentrations of EGF. EGF stimulation enhanced the levels of phosphorylation of each of the proteins listed above (Fig. 7B). Finally, to get some assessment of the amount of Pyk2 associated with c-Src following EGF treatment, we extensively immunoprecipitated c-Src and Pyk2 from A431 cell lysates treated with EGF (or not treated) and examined the immunoprecipitates as well as the cell lysates prior to and after immunoprecipitation for the relative 5 g, lane 4). After 48 h, the cells were treated with EGF (50 ng/ml) for 15 min. The phosphorylated sites in Stat3, Pyk2, c-Src, and ERK1/2 were detected by immunoblotting with appropriated antibodies. Pyk2, Stat3, c-Src, and ERK1/2 protein levels were detected by immunoblotting with suitable antibodies.
amounts of Pyk2 and c-Src (Fig. 7C). Based on this analysis, a significant portion of the Pyk2 in A431 cells becomes associated with c-Src following EGF signaling. DISCUSSION This study provides several lines of evidence supporting a role for Pyk2 in EGFR-and c-Src-induced Stat3 activation. First, the expression of Pyk2 in HeLa cells results in the activation of a Stat3 reporter gene and the phosphorylation of Stat3 on Tyr-705 and Ser-727, and it enhances the growth of Stat3 overexpressing cells. Second, Pyk2 amplifies c-Src-induced activation of the Stat3 reporter gene and augments c-Src-induced phosphorylation of Stat3 on the same residues. Third, the expression of a kinase-inactivated form of Pyk2 interferes with c-Src-induced Stat3 activation, c-Src-induced cell growth, and EGF-induced Stat3 activation. Fourth, c-Src not only phosphorylates Pyk2 on the multiple tyrosine sites but also strongly induces Pyk2 kinase activity. On the contrary, Pyk2 only weakly induces c-Src kinase activation in HeLa cells. 2 Fifth, in EGF-treated A431 cells, intracellular protein complexes form containing endogenous Pyk2 along with c-Src, EGFR, or Stat3, arguing that EGF signaling recruits c-Src, Pyk2, and Stat3 to EGFR.
Growth factors apparently activate Stat3 in a manner largely independent of JAKs but dependent upon the activation of Src kinases (42). Recombinant Src family kinase SH3 domains can mediate a direct interaction with Stat3 (24). In Rat-2 fibroblasts, the expression of Hck, a Src family kinase member, with a disrupted SH3 domain resulted in a failure to activate Stat3 and a reduced transforming activity. However, as mentioned previously in studies utilizing WB epithelial cells, Hep3B, and NIH 3T3 cells, the Btk family tyrosine kinase Etk functioned as an intermediary between c-Src and Stat3 activation (25). In co-transfection assays, Etk co-immunoprecipitated with Stat3, suggesting that Etk may directly phosphorylate Stat3. MEKK1 has also been shown to have a role in Stat3 activation (43). Overexpression of MEKK1 led to Stat3 activation, and a kinase-inactive form of MEKK1 inhibited EGFinduced Stat3 activation. In an in vitro kinase assay, MEKK1 phosphorylated Stat3 on Ser-727, and in vivo its expression led to phosphorylation of Stat3 on Tyr-705 through a pathway that involved c-Src and JAKs. Further confusing the issue, activated Rac1 induces Stat3 activation, and a dominant negative form of Rac1 inhibits EGF-induced Stat3 activation as assessed by phosphospecific Stat3 immunoblotting and reporter gene activation (44). In EGF-stimulated COS-1 cells that had previously been transfected with Stat3, a Rac1 immunoprecipitate contained Stat3, implying an interaction between Rac1 and Stat3. Our studies now also implicate Pyk2 in EGF-and c-Srcinduced Stat3 activation.
Although we cannot provide a clear synthesis of all these studies, some conclusions can be drawn. First, Src kinases play an essential role in Stat3 activation in all these studies, with the possible exception of the Rac1 study, which implicated Jak2 as the downstream kinase. Second, Etk (and perhaps other Btk kinases) and Pyk2 may play an analogous role; both amplify c-Src-mediated Stat3 activation. The relative importance of these proteins may depend upon their expression levels and intracellular localization. Strong Pyk2 expression is found in the brain and in hematopoietic cells, although many other cell types also express it (45). In addition, Pyk2 is found in many tumors, including glioblastoma, astrocytoma, lymphoma, breast carcinoma, prostate carcinoma, lung carcinoma, and hepatocellular carcinoma. Third, both Rac activation and MEKK1 activation can be tied to Pyk2 activation. For example, in cardiac fibroblasts, angiotensin II induces Pyk2 activation, which leads to Rac1 activation and MEKK1-dependent c-Jun NH 2 -terminal kinase (JNK) activation (46). In addition, Pyk2 is involved in the control of chemokine-and integrin-mediated Rac activation and associates with the Rac exchange factor, Vav1 (47). In another study, EGF potently activated MEKK1 and resulted in the association of Rac1 with MEKK1 (48). Inhibitory mutants of MEKK1 blocked Rac1-induced JNK activation. Although Pyk2 also couples stress signals to the JNK pathway, an involvement of MEKK1 in those pathways has not been clearly established. Overall, despite these links between Pyk2, Rac1, and MEKK1 activation, their relative importance in growth factor-induced Stat3 activation will require further study. Fourth, Rac1 has emerged as a mediator of v-Src-induced transformation. In v-Src-transformed cells, Rac1 activity is high, and both Vav2 and Tiam1 (another Rac exchange factor) are phosphorylated on tyrosine residues (49). Although the activation of Tiam1 and Vav2 was attributed to v-Src activity, Pyk2 might function to amplify the activation of the Rac exchange factors as it does in Stat3 activation.
Our data indicate that Pyk2 functions predominantly down- . The levels of EGFR, Pyk2, Stat3, Erk1/2, and c-Src in the cell lysates were also detected by immunoblotting (right panel). C, relative amounts of endogenous c-Src associated with Pyk2 following EGF signaling. A431 cells treated with EGF (ϩ) or not treated (Ϫ) were immunoprecipitated with either a Pyk2-specific antibody or a c-Src-specific antibody. The immunoprecipitates were immunoblotted with the same antibodies (left panel). To reduce the impact of the immunoglobulin heavy chain band (*) on the detection of the c-Src band, the immunoblot was cut just beneath the c-Src band. The cell lysates (Ly), either prior to or following the immunoprecipitations, were immunoblotted for Pyk2 or c-Src (right panel).
stream of c-Src in EGF receptor signaling. Activated c-Src results in potent Pyk2 activation, whereas (as indicated above) we found that Pyk2 has only a modest effect on c-Src activation. However, other studies have implicated Pyk2 as upstream of c-Src activation (50). Irrespective of its location in c-Src signaling, we would argue that Pyk2 functions as an amplifier to augment c-Src signaling to downstream pathways. Activated Pyk2 may directly or indirectly phosphorylate Stat3 on Tyr-705; however, its induction of Ser-727 phosphorylation must occur indirectly. In our study, the expression of Pyk2 in HeLa cells led to a more prominent phosphorylation of Stat3 on Ser-727 as compared with Tyr-705. The consensus of studies favors a MAPK module as the mediator of Stat3 Ser-727 phosphorylation; p38, Erk, and JNK have all been implicated (43,(51)(52)(53)(54). As a known activator of the MAPK modules (55,56), Pyk2 likely triggers Ser-727 Stat3 phosphorylation via these modules.
Besides growth factor receptors, signaling through a variety of GPCRs also leads to Stat3 activation (57)(58)(59). Predominantly, GPCRs that link to either G o or G q subfamily members have been associated with Stat3 activation. In the majority of the studies describing GPCR-triggered Stat3 activation, the JAKs have been ascribed major roles. Possible physical association between the JAKs and the angiotensin II AT1 receptor (60), platelet-activating factor receptor (61), and chemokine receptors has been reported (62). Similar to the growth factor receptors, Rac1 activation plays a prominent role in Stat3 activation following the exposure of vascular smooth muscle cells to either angiotensin II or thrombin (63). In vascular smooth muscle cells, angiotensin II signaling also leads to prominent Pyk2 activation, and Pyk2 has been found associated with Jak2 constitutively. Furthermore, two distinct Pyk2 dominant negative forms interfered with angiotensin II-induced activation of Jak2 (20). GPCRs may use a number of mechanisms to activate Pyk2, including increases in intracellular Ca 2ϩ triggered by the activation of phospholipase C (G qor G␥-mediated) and via the activation of G 13 (29). Preliminary experiments have supported a role for Pyk2 in GPCR signaling to Stat3 activation. In those experiments, the exposure of HeLa cells (transfected previously with the M1 muscarinic receptor) to carbachol resulted in a weak increase in Stat3 phosphorylation on Tyr-705 and Ser-727. However, the cotransfection of a modest amount of Pyk2 led to a dramatic increase in Stat3 phosphorylation on the same residues, which was much higher than we observed with carbachol alone or following Pyk2 expression. 3 Thus, in those cells that express adequate levels of Pyk2, it may also serve to help link GPCR signaling to Stat3 activation.
In conclusion, Pyk2 plays a significant role in enhancing Stat3 activation following EGF signaling and may be involved in Src-mediated cell transformation and in GPCR signaling leading to Stat3 activation. Also linking c-Src, Pyk2, and Stat3 are their known roles in cell migration. In numerous studies, Pyk2 has emerged as a key mediator linking receptor signaling to critical downstream signaling pathways. | 6,627.6 | 2004-04-23T00:00:00.000 | [
"Biology"
] |
A comparative review of viral entry and attachment during large and giant dsDNA virus infections
Viruses enter host cells via several mechanisms, including endocytosis, macropinocytosis, and phagocytosis. They can also fuse at the plasma membrane and can spread within the host via cell-to-cell fusion or syncytia. The mechanism used by a given viral strain depends on its external topology and proteome and the type of cell being entered. This comparative review discusses the cellular attachment receptors and entry pathways of dsDNA viruses belonging to the families Adenoviridae, Baculoviridae, Herpesviridae and nucleocytoplasmic large DNA viruses (NCLDVs) belonging to the families Ascoviridae, Asfarviridae, Iridoviridae, Phycodnaviridae, and Poxviridae, and giant viruses belonging to the families Mimiviridae and Marseilleviridae as well as the proposed families Pandoraviridae and Pithoviridae. Although these viruses have several common features (e.g., topology, replication and protein sequence similarities) they utilize different entry pathways to infect wide-range of hosts, including humans, other mammals, invertebrates, fish, protozoa and algae. Similarities and differences between the entry methods used by these virus families are highlighted, with particular emphasis on viral topology and proteins that mediate viral attachment and entry. Cell types that are frequently used to study viral entry are also reviewed, along with other factors that affect virus-host cell interactions.
Introduction
Viruses utilize several mechanisms to enter host cells. This review focuses on the relationships between the external topology of the virions and their entry mechanisms in different cell types, as well as the roles of cellular receptors and viral attachment factors. Ten viral families are discussed, including Adenoviridae, Baculoviridae, Herpesviridae, and nucleocytoplasmic large DNA viruses (NCLDVs). The NCLDVs include large and giant viruses characterized by their large virions and genomes, and can be classified into several distinct families: Ascoviridae, Asfarviridae, Iridoviridae, Mimiviridae, Marseilleviridae, Phycodnaviridae and Poxviridae. They also include members of the proposed families Pandoraviridae and Pithoviridae as well as the recently isolated molivirus and faustovirus [1][2][3][4]. They replicate completely or partially in the cytoplasm and are larger than other viruses. They may also have several common traits, including similarities in their protein sequences and topological features. Figure 1 shows the external topology of each viral family. They might be evolutionary related and share a common ancestor [5,6]. It has been proposed that the NCLDVs be classified into one order, named "Megavirales" [7], whereas, herpesviruses belong to the order Herpesvirales. Generally, mimiviruses and phycodnaviruses are closely related to pandoraviruses and moliviruses, whereas pithoviruses are related to marseilleviruses, iridoviruses and ascoviruses, and faustovirus are closely related to asfarviruses, [1-4, 8, 9].
Virus attachment and receptors
Viruses attach to proteins known as cellular receptors or attachment factors on the surface of the host cell [11,12]. In addition, certain membrane lipids and glycans may be necessary for viral entry. These factors stabilize the virus on the cell surface and allow it to circumvent the cell's barriers to entry. High-affinity interactions between viral proteins and cellular receptors drive conformational changes in the proteins' structures that activate signaling cascades and destabilize the plasma membrane, leading to pore formation and internalization of the virus as shown in Figure 2a [13]. These interactions can be initiated by specific motifs or domains in both viral and host proteins. Notable viral protein motifs that facilitate entry by binding to cellular counterparts include the integrin-binding (RGD), endocytosis (PPxY and Yxx[FILV]), and clathrin endocytosis (PWxxW) motifs, where "x" denotes any residue [14]. It is worth noting that a receptor could be accompanied by an additional co-receptor that triggers a particular entry pathway or stabilizes the virus at plasma membrane.
General mechanisms of virus entry
Cells can internalize viruses by endocytosis, as reviewed elsewhere [11][12][13][15][16][17] and depicted in Figure 2. Alternatively, the virus may fuse with the cell membrane. Several factors determine which entry mechanism will be active, including the cell type and the cellular receptors it displays. Aspects of the virus' external topology, such as the presence of surface protrusions or glycoproteins, may also affect the entry process. Viruses enter host cells via one of three major pathways: (A) Fusion: Viral proteins promote the fusion of the virion with the plasma membrane, which then form a pore, and the virion becomes uncoated. Its genomic cargo is then transferred into the cytoplasm [12,13,[18][19][20][21]. The proteins involved in fusion, so-called fusogens, can be divided into three classes: (i) class I fusogens, which are dominated by α-helical coils; (ii) class II fusogens, which consist predominantly of β-sheets; and (iii) class III fusogens, which feature both secondary structure types.
(B) Cell-cell fusion: Some viruses such as vaccinia virus (VV) and herpes simplex virus (HSV) induce the expression of proteins on the surfaces of infected cells that attract uninfected cells and cause them to fuse with the infected cell at low pH values to form a multinuclear cell known as a syncytium [11,13,22,23]. Syncytium formation represents a very efficient way for a virus to spread within a host: it circumvents the immune response and creates a good site of replication for a nuclear-replicating virus. It should be noted that syncytium formation is not always regarded as an entry mechanism per se.
(C) Endocytosis: Once the cell internalizes the virus, it is then delivered to an acidic pit, a so-called early endosome. The virus then may be transferred into a late endosome and then to a lysosome. Alternatively, due to the low pH value in the lumen of endosomes, the viral membrane can fuse with the endosomal membrane, releasing the viral genome into the cytoplasm [12]. After exiting from endosomes, some adenoviruses or poxviruses may use microtubules for transport within the cytoplasm. Once in cytoplasm, some viruses move toward the nucleus to deliver their cargo inside the nucleus, whereas the NCLDVs usually remain in cytoplasm to initiate their replication cycle. Dynamin GTPase may have a key role in regulating most endocytic pathways. During virus entry, dynamin is deposited in the neck of the endocytic pit toward the cytoplasm leading to the excision of [24,25]. There are several major endocytosis-based pathways that viruses can use to enter cells and evade the host's immune system. These pathways differ in terms of the types of particles involved and the molecules that are important in the process. The most important viral entry pathways are as follows: (1) Phagocytosis (cell eating), which occurs in specialized mammalian cells (so-called professional phagocytes, e.g., dendritic cells and macrophages) that engulf large and essential particles. Viral entry by this pathway typically involves the formation of large extracellular projections, and the internalized virus is taken into a phagosome. Actin and RhoA are typically necessary for this process. (2) Pinocytosis (cell drinking), which is the process by which cells take up solutes and fluids. Pinocytotic processes can be further classified based on the membrane structures and types of molecules they are associated with. Macropinocytosis is a nonspecific process, and particles internalized by this route may not be essential for the cell. When it is exploited by viruses, interactions between viral proteins and cell receptors activate intracellular signaling and actin rearrangements that form ruffles or filopodia on the external surface of the host cell. The ruffles then close up to form a vesicle known as a macropinosome, which carries the virus into the cytosol. Actin, Rho GTPases (Rac and Cdc42), PI3K, and Na+/H+ exchange are usually required for this pathway, and kinases are required to regulate macropinosome formation and closure. Although dynamin might not be required for some viruses to enter via macropinocytosis, some strains of adenoviruses and poxviruses require dynamin to enter the cell. (3) Clathrin-mediated endocytosis, which is the process by which the cell internalizes the virus in a clathrinrich flask-shaped invagination/cavity (vesicle) known as a clathrin-coated pit. The virus is then delivered into the cytoplasm via endosomes. Clathrin and cholesterol are required, and dynamin and transferrin are usually involved in pit formation.
(4) Caveolar/raft endocytosis, which is similar to clathrin-mediated endocytosis but involves pits containing caveolin-1 rather than clathrin. The internalized virus is delivered to the cytoplasm in cave-like bodies known as caveolae or caveosomes, whose internal pH is neutral. (5) Endocytosis based on other routes. These pathways involve vesicles that contain neither clathrin nor caveolin. However, like the clathrin-and caveolin-based pathways, they generally require dynamin, cholesterol and/or lipids. Interestingly, lymphocytic choriomeningitis virus uses a dynamin-, clathrin-, and caveolinindependent route that is also independent of actin, lipid rafts, and the pH [26,27].
Mechanisms of attachment and entry utilized by large and giant DNA viruses
Members of all ten viral families covered in the review infect a wide range of potential hosts, including humans, other mammals, invertebrates, fish, protozoa, and algae, causing serious problems in public health, livestock farming, and aquaculture ( Table 1). As suggested by this diversity of potential hosts, they can use many different mechanisms to enter host cells, and members of the same viral family may use very different mechanisms to enter a given host cell type.
To ensure an efficient virus infection, a virus may utilize more than one mechanism to enter a given host cell.
Adenoviridae
Adenoviruses (Ad) are non-enveloped icosahedral viruses with diameters of 70-90 nm ( Fig. 1) that can be divided into seven groups and 50+ serotypes. They harbor 30 to 40-kb linear dsDNA genomes encoding around 45 proteins, and they replicate in the nucleus. Their genomes encode fiber proteins with a conserved N-terminal tail, a shaft, and a globular knob domain. The lengths of these fibers are similar within a serotype, but Ad-F and Ad-G encode two fiber proteins: short and long [28,29]. The fibers bind to a wide range of cell receptors [30]; upon binding at the plasma membrane, the fibers become detached from the viral core and remain at the surface, while the core enters the cell [30][31][32]. The coxsackie-adenovirus receptor (CAR) is a functional receptor for most Ad strains [33]; it is expressed in the tight junctions in the epithelial cells of some human tissues (brain, heart and pancreas) and various tumor cells, but not in mice or primates [34,35] ( Table 2). The long viral fibers are flexible enough to permit the fiber knob to interact with CAR, bringing the penton base of the viral capsid into contact with integrins in the host cell membrane. Other cellular receptors targeted by adenoviruses include CD46, CD80, CD86, desmoglein-2, heparan sulphate, sialic acid, major histocompatibility complex-1-α2, and vascular cell adhesion molecule-1. Ad-2, Ad-5 and egg drop syndrome virus enter host cells via clathrin-mediated endocytosis [36][37][38], whereas Ad-3, Ad-5 and Ad-35 enter via macropinocytosis [37,39]. Longer lists of cellular receptors and entry pathways exploited by adenoviruses are given in Tables 2 and 3.
Baculoviridae
Baculoviruses are arthropod-specific enveloped virus with nucleocapsid dimensions of 21 × 260 nm (Fig. 1). They have circular dsDNA genomes of 80-180 kb that encode 100-180 proteins and replicate in the nucleus. They are used in biocontrol against insects, and as vectors for gene transfer and protein expression. Consequently, their entry into insect, human, and cancer cells has an increasing biological impact (see Tables 1 and 3). Two baculovirus phenotypes have been characterized: budded and occlusion-derived. Viruses of this family express two crucial fusogens, gp64 (class III) and F (class I), which are functionally analogous and can both trigger low-pH membrane fusion during endocytosis. There are evidences that gp64 facilitate virus entry and fusion with the plasma membrane [167][168][169][170]. Bombyx mori nucleopolyhedrovirus (BmNPV) enters Bombyx mori (BmN) cells via cholesterol-dependent macropinocytosis [171], while Autographa californica multiple nucleopolyhedrovirus (AcMNPV) grown in Spodoptera frugiperda (sf9) cells enters human hepatocarcinoma (HepG2) and embryonic kidney (293) cell lines via a dynamin-, raftand RhoA-dependent phagocytosis-like mechanism [172], but clathrin-mediated endocytosis or macropinocytosis may not be involved in the virus uptake. However, recombinant AcMNPV from sf21 cells enters BHK-21 cells via low-pH clathrin-mediated endocytosis [173]. Additionally, a pseudotyped vesicular stomatitis virus (VSV) encoding gp64 grown in Sf9 cells enters the Huh7 and 293 cells via [160] macropinocytosis and endocytosis, which is mediated by viral gp64, and cellular cholesterol, dynamin and clathrin [169]. This process also requires the host cell proteins HSPG and syndecan-1 [174], as well as cholesterol [169,175].
Poxviridae
Poxviruses are widely distributed enveloped viruses (∼360 × 270 × 250 nm) that replicate in the cytoplasm ( Fig. 1) [176]. They harbor a 130 to 375-kb linear genome that encodes ~200 proteins. Vaccinia virus (VV) is a prototypic virus of this class that was used as a smallpox vaccine. It exists in three forms [177,178]. The first is the mature virion (MVs, also known as the intracellular mature virus, IMV or INV), which has a brick-shaped structure; it is the most abundant, stable and simple form and is active in hosthost transmission. The second form is the wrapped virion (WV or intracellular enveloped virus, IEV), which contains an MV core wrapped in two membranes. WVs travel to the cell periphery via microtubules and fuse with the plasma membrane, and they are then released by exocytosis as the third form, the extracellular virion (EV, or cell-associated extracellular enveloped virus, CEV, or extracellular enveloped virus, EEV), which is specialized for exiting and cellto-cell transmission within the host. Four proteins are used for attachment to the cell surface (A26, A27, D8 and H3), and the MV displays the so-called entry-fusion complex (EFC), which consists of 11 proteins (A16L, A21L, A28L, F9, G3L, G9R, H2, J5, L1R, L5R and O3L). These proteins interact with one another and mediate virus-cell fusion, membrane disruption, and cell-to-cell fusion [176,179,180] (Tables 3 and 4). Inhibition of any of these proteins destabilizes the complex and hence perturbs viral entry. MV enters host cells via endocytosis or fusion with the plasma membrane, leaving the virus in endosomes [179][180][181][182][183][184] (see Table 3). Notably, the mechanisms of fusion for MVs and EVs at the plasma membrane and endosome are identical, and both require EFC proteins. VV (MV/EV), WR, and IHD-J enter HeLa cells via macropinocytosis [132,[134][135][136][137][138][139] and have also been suggested to enter via a parallel endocytotic mechanism [138]. In Drosophila, VV enters DL1 cells by macropinocytosis [147], but it enters S2 cells via endocytosis [148].
Giant viruses (Mimiviridae and Marseilleviridae)
These families comprise the largest known viruses, so-called giant viruses (GVs). They have genomes of ~0.5-2.5 Mb that encode 400-2500 proteins, and they replicate in the cytoplasm. Representatives of these families have been isolated from diverse habitats, including bronchoalveolar lavage fluid [204] and stools [205] from patients with pneumonia, insects [206], and leeches [207] (for a detailed review, see reference [208], [209]). The nature of the relationship between giant viruses and pneumonia remains to be elucidated [209][210][211][212]. Briefly, the giant viruses were detected by serological and genomic methods in patients with respiratory symptoms. Moreover, recent images show giant virus-and virus factorylike structures in number of human cells [213].
Mimivirus virions are 500 nm in diameter, with a 1 Mb dsDNA genome encoding 900 proteins. Their surfaces are completely covered with fibers (120 nm long) attached to the capsid via a disc-shaped feature except at one capsid vertex (Fig. 1). The outer fibers may play some role in the virus' attachment to or entry into host cells [214,215], but the details of its mechanisms of attachment and entry are unknown. Proteomic and gene silencing experiments revealed that the fibres consist of at least four proteins (R135, L725, L829, and R856); viruses in which any of these proteins are silenced exhibit short and deformed fibers [214,[216][217][218][219], as shown in Figure 3. Further structural analysis showed that R135 is a component of the fibers and is required for host cell entry [219]. In addition, a electron microscopy showed that L725 aggregates form fibre-like architectures [217]. The fibers' shape differs from that in other viruses, and the fiber proteins exhibit no sequence similarity to proteins encoded by other viruses. It should be noted that some giant viruses lack external fibers -for instance, marseilleviruses (which are 200 nm in diameter with 350-kb circular dsDNA genomes) have topologies similar to those of mimiviruses but have only short (12 nm) or no fibers [216]. Grouper spleen cell line pH-dependent clathrin-endocytosis and macropinocytosis [163]; the deletion of VP088 envelope protein inhibits viral entry [164]. Large yellow croaker iridovirus Bluegill fry (BF-2) cells 037L (RGD motif) ∞ integrins inducing fusion [165,166] Mimiviruses enter amoebae or macrophages via a phagocytosis-like mechanism that depends on dynamin, actin and PI3-K [220,221]. Unlike poxviruses, the entire virion with fiber can be seen inside the host. Further analyses showed that individual Marseillevirus virions enter A. castellanii cells via phagocytosis or in vesicles, endocytosis and micropinocytosis, were also suggested, but remain to be investigated [222]. Because the closely related Mimiviruses enter cells via phagocytosis, it seems very plausible that Marseillevirus could also enter via such a mechanism. It should be noted that the original host of most giant virus strains, including APMV, is not known; neither amoebae nor macrophages are their natural hosts. The tropism of these viruses and their interactions with their natural host cells thus remain to be elucidated.
Phycodnaviridae
The Phycodnaviridae are marine enveloped viruses with dimensions of 100-220 nm that have 330 to 560-kb linear dsDNA genomes and replicate in the cytoplasm of algae (Fig. 1). Despite having algal hosts, their entry pathways resemble those used by bacteriophages and animal viruses. Paramecium bursaria chlorella virus (PBCV-1) attaches to host cells via a viral vertex and degrades the host cell wall at the site of attachment like a bacteriophage [223]. To this end, it encodes chitinases, chitosanase, β -1,3-glucanase, and alginase enzymes that catalyze cell wall lysis [224]; it also encodes potassium ion channel proteins, which have a putative role in entry [225,226]. After entry, PBCV leaves an empty shell at the cell surface. Another member of this family, Emiliania huxleyi virus 86, enters host cells via endocytosis or fusion of the outer lipid membrane surrounding the capsid, which is similar to animal virus entry [227]. The intact virion can be seen in the cytoplasm before the capsid breaks down to release the genome. Ectocarpus fasciculatus virus infects zoospores or gametes of brown algae that lack cell walls [228]. It fuses with the outer plasma membrane of the host cell, leaving the capsid outside the cell surface, and injects its genomic cargo into the cytoplasm.
Iridoviridae
The iridoviruses include both enveloped and non-enveloped viruses with dimensions of 120-350 nm that replicate in the cytoplasm of insect and fish cells (Fig. 1). They harbor 100 to 200-kb linear dsDNA genomes with circularly permuted and redundant termini. The enveloped viruses fuse with the cell membrane of the host cell, whereas the non-enveloped viruses enter via endocytic pathways [236] (see Table 3). Frog virus 3, tiger frog virus, and infectious spleen and kidney necrosis virus enter BHK-21, HepG2 and Mandarin fish fry cells, respectively, by endocytosis [159][160][161][162]. The VP088 protein encoded by SGIV facilitates both endocytosis and macropinocytosis into a grouper spleen cell line [163,164].
Ascoviridae
These viruses (~130 nm diameter, 200-400 nm in length) infect invertebrates; they replicate in the nucleus and harbor 150 to 190-kb circular dsDNA genomes that encode 180 proteins (Fig. 1). They are phylogenetically related to iridoviruses, and their entry mechanisms are obscure. However, Heliothis virescens ascovirus-3e infections are known to require actin rearrangement [237].
Conclusion and future perspectives
Viruses enter host cells via several mechanisms, depending on the host cell type and viral strain. Concerns about the risks of viral outbreaks have prompted efforts to characterize emerging pathogens and predict the emergence and properties of new viruses. A further motivating factor for such studies is the possibility of developing non-cytotoxic antiviral drugs that act outside host cells by preventing viral attachment or entry rather than disrupting viral replication inside cells. This review details the entry pathways exploited by large dsDNA viruses. Their entry pathways are affected by several factors, including the external topology of the virions (particularly the presence of surface protrusions and their topology), the targeted cell type, the cellular receptors that are present, and the viral protein content.
Fig. 3
Silencing any one of the four fiber-associated proteins in mimivirus produces viruses bearing short and deformed fibers compared to the wild-type control (WT). The images are adapted from reference [216] While viruses from the same viral family often have similar topologies and encode proteins with similar sequences and structures, they may still use different entry mechanisms. As mentioned in Table 3, the virus protein(s) may bind to one or more receptors and co-receptors (see herpesviruses for examples). The binding may activate number of factors (proteins/pathways) that are relevant to infection. These factors could be characteristics of other entry pathways (see, for example, entry of KSHV). Additionally, the MV form of vaccinia virus can enter cells by direct fusion with either the plasma membrane or the membrane of a vesicle after endocytosis.
It is worth emphasizing that additional factors could affect the entry mechanism. Among these factors is protein sequence similarity; some viral proteins exhibit functional and structural similarities despite having little or no sequence similarity. For example, the HSV-1 protein gB is a class III fusogen that resembles (especially in its post-fusion conformation) the gG protein of the RNA rhabdovirus VSV and the baculovirus protein gp64 [72,[238][239][240][241]. Additionally, the EBV protein gp42 is a functional homolog of HSV gD, but the two share no sequence similarity [110]. The functional motifs of viral proteins appear to play central roles in determining the entry pathways available to specific viruses, so their analysis could enable prediction of entry pathways and virus-host cell interactions [14,242]. Closely related viruses that infect the same host generally have similar functional motif profiles [242]. Another factor that may be important is ubiquitination of viral proteins inside host cells, which can affect infection and microtubule trafficking. For instance, the adenovirus protein VI recruits Nedd4 E3 ubiquitin ligases via interactions involving its PPxY motif [14,61,243,244]. Biophysical factors may also affect viral entry. For example, the entry of CMV into vascular endothelial cells is promoted by low levels of shear stress [245]. Similarly, the fusion of the enveloped HSV requires a negative curvature of the lipid bilayer and can thus be suppressed by factors that prevent the formation of such negative curvature [246].
Differences in observed entry pathways for different strains or different samples of the same viral strain may be due to differences in experimental design and conditions [61], the use of a non-physiological host in vitro (e.g., nonwild-type cells), or the use of a laboratory strain whose gene content differs from that of the wild-type virus, as in the case of CMV [64]. It is generally accepted that cell lines (i.e., immortalized cells) often differ genetically and phenotypically from cells in native tissues (or primary cells). Consequently, the type of cell used when studying viral entry may profoundly affect the results obtained. It has also been shown that baculoviruses grown in different insect cell types enter mammalian cells via different mechanisms [247]. These results clearly show that there are several aspects of viral entry into host cells that are very poorly understood. Comparative studies could potentially shed important light on this topic and help to clarify unknown aspects of virus-host cell interactions. In addition, more comprehensive information on viral topology and protein sequences will help to understand virus tropism. Further studies in this area should focus on predicting viral entry mechanisms and the evolution of interactions between host cells and viruses. Efforts should also be made to identify optimal experimental conditions for viral entry in different cell types and for different viral families. | 5,440.4 | 2017-09-02T00:00:00.000 | [
"Biology"
] |
Variety of Iron Oxide Inclusions in Sapphire from Southern Vietnam: Indication of Environmental Change during Crystallization
: Sapphires from alluvial deposits associated with Cenozoic basalts in Southern Vietnam were collected for investigation of mineral inclusions. In this report, primary iron oxide inclusions were focused on, with detailed mineral chemistry using a Raman spectroscope and electron probe micro-analyzer. Consequently, a variety of iron oxide inclusions were recognized as wüstite, hercynite, and ilmenite. Ilmenite falling within an ilmenite–hematite series ranged in composition between Il 24-30 He 36-38 Mt 35-40 and Il 49-54 He 34-40 Mt 7-10 , classified as titanomagnetite and titanohematite, respectively. Wüstite with non-stoichiometry, (Fe 2+0.3-0.9 )(Ti 3+<0.179 Al 3+ ≤ 0.6 Cr 3+<0.1 Fe 3+ ≤ 0.46 ) (cid:3) ≤ 0.23 O, was associated with hercynite inclusions, clearly indicating cogenetic sapphire formation. Wüstite and sapphire appear to have been formed from the breakdown reaction of hercynite (hercynite = sapphire+wüstite) within a reduction magma chamber. Titanohematite and titanomagnetite series might have crystallized during iron–titanium reequilibration via subsolidus exsolution under a slightly oxidized cooling process.
Introduction
Iron oxide minerals have been considered as a significant geothermometer for their host rocks [1][2][3][4][5][6]. Previous investigations of gem sapphire from Southern Vietnam, such as Dak Nong, Di Linh, and Binh Thuan deposits, have reported several iron oxide inclusions [7][8][9]. Most of these iron oxides were identified as ilmenite, magnetite-hercynite, and chromite-hercynite, using a scanning electron microscope-energy dispersive spectrometer (SEM-EDS) and an X-ray diffractometer (XRD) [8]. Moreover, Izokh et al. [7] reported an iron oxide inclusion, namely, Al-Ti-hematite, that was chemically analyzed by an electron probe micro-analyzer (EPMA); subsequently, they proposed that crystallization of host sapphire should relate to iron-rich syenitic melt and metasomatism between crustal rocks and contaminated basaltic melt in the Dak Nong deposit. Recently, Vu et al. [9] reported spinel and other unidentified oxide inclusions in sapphires from many deposits in Southern Vietnam which more details and further investigation are reported herein this manuscript. Besides, the crystallization of sapphire and related zircon from Southern Vietnam may occur in the lithospheric mantle which is related to carbonatite-dominant melts as a result of partial melting of a metasomatized lithospheric mantle source, at over 900 • C [10].
Although iron oxide inclusions were previously reported, their mineral chemistry has never been fully analyzed. This study was therefore designed to analyze most types of iron oxide inclusion in sapphire from Dak Nong, Di Linh, Binh Thuan, Krong Nang, and Pleiku.
Geological Setting
Southern Vietnam geologically consists of a large-scale structure of the Da Lat active continental margin (Da Lat zone), Indosinian polyepisodic orogenic belt (Srepok orogenic belt), and Kontum massif [11], which is part of the Indosinian craton [12] (Figure 1). This region is composed of Archean-Proterozoic basement rocks, Early to Middle Paleozoic cover rocks, Jurassic sediments, late Mesozoic rocks, and Cenozoic basaltic rocks [12] (Figure 2). The basement rocks in this area are characterized by metamorphic complexes of granulites which are usually covered by volcanogenic sedimentary rocks, metamorphosed sedimentary rocks within greenschist facies of the Early to Middle Paleozoic, as well as sandstone, siltstone, and shale of Jurassic sedimentary formation. These basements and upper rocks are intruded by a number of late Mesozoic igneous rocks, including Triassic granite, granodiorite, and granosyenite (results of the Indosinian-Yangtze collision during the Permo-Triassic about 245-240 Ma) and Cretaceous diorite and granodiorite (result of the Paleo-Pacific plate subduction) [13][14][15]. Late Cenozoic basalts associated with sapphire deposits in Southern Vietnam have been mapped, overlining the older rock formations reported above. [11] and Hoa et al. [12].
These Late Cenozoic basalts ( Figure 2) are related to regional tectonic terranes, particularly after the end of East Sea opening in the Middle Miocene [16][17][18]. Paleo-Pacific oceanic crustal material, previously subducted into the lower mantle, was subsequently entrained into the Hainan plume which was the main cause of basaltic magmatism in Southern Vietnam [19]. According to Hoang and Flower [20], these volcanic activities were extended more than 100 km in diameter with a thickness up to several hundred meters and covered a total area of approximately 23,000 km 2 . The centers of volcanism appear to have developed during two main eruptive episodes. The early phases generated mainly quartz and olivine tholeiites with rare alkali basalt whereas the later phases produced [11] and Hoa et al. [12].
These Late Cenozoic basalts ( Figure 2) are related to regional tectonic terranes, particularly after the end of East Sea opening in the Middle Miocene [16][17][18]. Paleo-Pacific oceanic crustal material, previously subducted into the lower mantle, was subsequently entrained into the Hainan plume which was the main cause of basaltic magmatism in Southern Vietnam [19]. According to Hoang and Flower [20], these volcanic activities were extended more than 100 km in diameter with a thickness up to several hundred meters and covered a total area of approximately 23,000 km 2 . The centers of volcanism appear to have developed during two main eruptive episodes. The early phases generated mainly quartz and olivine tholeiites with rare alkali basalt whereas the later phases produced olivine tholeiite, alkali basalt, basanite, and rare nephelinite. Tholeiite eruptions occurred significantly in the centers associated with extensional rifting. On the other hand, alkali basalt, olivine tholeiite, and basanite appear to have erupted along the conjugate strike-slip faults [18]. It should be noted that sapphire and zircon occurrences mainly discovered in Quaternary and Upper-Pleistocene alluvial deposits derived from the alkali basalts [8][9][10]21]. They would originate in the deep-seated formations before being transported as megacryst onto the Earth's surface by alkali basaltic magmas. [18]. It should be noted that sapphire and zircon occurrences mainly discovered in Quaternary and Upper-Pleistocene alluvial deposits derived from the alkali basalts [8][9][10]21]. They would originate in the deep-seated formations before being transported as megacryst onto the Earth's surface by alkali basaltic magmas.
Materials and Methods
Sapphire collections of about 5 mm crystals were sampled from Dak Nong, Di Linh, Binh Thuan, Krong Nang, and Pleiku in Southern Vietnam (see Figure 2 and Table 1 and Table S1). The sapphire crystals were detected for metallic opaque inclusions for further study. After being mounted in epoxy, they were ground by a diamond wheel until inclusions were exposed; subsequently, they were polished using 6 µm, 3 µm, and 1 µm diamond pastes. These inclusions were initially characterized using a laser Raman spectroscope, inVia model, Renishaw equipped with a Leica optical microscope at the Gem and Jewelry Institute of Thailand (Public Organization) (GIT). The laser beam was generally set at about 5µm with standard conditions including 532 nm radiation of the NIR diode laser emitting at 785 nm and power of 15.7 mW (about 5 mW on the sample), at a resolution of approximately 0.5 cm −1 . It should be noted that Raman patterns of iron minerals may be transformed rapidly by laser induction, leading to the modification of Raman shift, and the same effect can also be induced by the natural processes such as oxidation, recrystallization, order-disorder transitions (cation redistribution), phase transition, or decomposition [22][23][24]. Therefore, low laser power (0.5 mW) and normal operation power (5
Materials and Methods
Sapphire collections of about 5 mm crystals were sampled from Dak Nong, Di Linh, Binh Thuan, Krong Nang, and Pleiku in Southern Vietnam (see Figure 2 and Table 1 and Table S1). The sapphire crystals were detected for metallic opaque inclusions for further study. After being mounted in epoxy, they were ground by a diamond wheel until inclusions were exposed; subsequently, they were polished using 6 µm, 3 µm, and 1 µm diamond pastes. These inclusions were initially characterized using a laser Raman spectroscope, inVia model, Renishaw equipped with a Leica optical microscope at the Gem and Jewelry Institute of Thailand (Public Organization) (GIT). The laser beam was generally set at about 5µm with standard conditions including 532 nm radiation of the NIR diode laser emitting at 785 nm and power of 15.7 mW (about 5 mW on the sample), at a resolution of approximately 0.5 cm −1 . It should be noted that Raman patterns of iron minerals may be transformed rapidly by laser induction, leading to the modification of Raman shift, and the same effect can also be induced by the natural processes such as oxidation, recrystallization, order-disorder transitions (cation redistribution), phase transition, or decomposition [22][23][24]. Therefore, low laser power (0.5 mW) and normal operation power (5 mW) on the sample were applied in this study to observe the Raman pattern and its alteration in wüstite inclusions because its structure is sensitive to laser power. However, ilmenite and hercynite spinel were analyzed using normal operation laser power of 5 mW, as suggested by Wang et al. [25]. Each spectrum was recorded within the spectral range of 200 cm −1 to 1500 cm −1 for 20 s of exposure time, 6 accumulation, and 50× magnification, at a laboratory temperature of about 22 • C.
After identification using the Raman technique, these inclusion samples were carbon coated prior to major and minor analyses using an electron probe microanalyzer (EPMA, JEOL model JXA-8100) at the Department of Geology, Faculty of Science, Chulalongkorn University. The analytical condition was set at 15 kV acceleration voltages with about 24 nA probe current. Appropriate standards and analytical crystals were selected for analyses with 30 s for peak counts and background counts of each element before automatic ZAF correction, requested for three main effects (i.e., atomic number, absorption, and fluorescence excitation) influencing spectroscopic analysis of characteristic X-rays, was applied to report in oxide contents. Finally, atomic proportions of these analyses were recalculated on the basis of proper oxygen in their formular before Fe 2+ and Fe 3+ ratios were estimated accordingly using the equation of Droop [26].
Results
Although all of the studied iron oxide minerals looked similarly black opaque crystals, their morphological affinities were classified into two groups, octahedral and rhombohedral shapes, indicating significantly primary inclusions. Octahedral iron oxides formed as single and aggregate crystals ranging in size from 20-500 µm ( Figure 3). In contrast, rhombohedral iron oxides occurred typically as single crystals, about 80-600 µm long, and 40-300 µm wide ( Figure 4). Raman spectroscopic identification of these iron oxide inclusions is reported below. mW) on the sample were applied in this study to observe the Raman pattern and its alteration in wüstite inclusions because its structure is sensitive to laser power. However, ilmenite and hercynite spinel were analyzed using normal operation laser power of 5 mW, as suggested by Wang et al. [25]. Each spectrum was recorded within the spectral range of 200 cm −1 to 1500 cm −1 for 20 s of exposure time, 6 accumulation, and 50× magnification, at a laboratory temperature of about 22 °C.
After identification using the Raman technique, these inclusion samples were carbon coated prior to major and minor analyses using an electron probe microanalyzer (EPMA, JEOL model JXA-8100) at the Department of Geology, Faculty of Science, Chulalongkorn University. The analytical condition was set at 15 kV acceleration voltages with about 24 nA probe current. Appropriate standards and analytical crystals were selected for analyses with 30 s for peak counts and background counts of each element before automatic ZAF correction, requested for three main effects (i.e., atomic number, absorption, and fluorescence excitation) influencing spectroscopic analysis of characteristic X-rays, was applied to report in oxide contents. Finally, atomic proportions of these analyses were recalculated on the basis of proper oxygen in their formular before Fe 2+ and Fe 3+ ratios were estimated accordingly using the equation of Droop [26].
Results
Although all of the studied iron oxide minerals looked similarly black opaque crystals, their morphological affinities were classified into two groups, octahedral and rhombohedral shapes, indicating significantly primary inclusions. Octahedral iron oxides formed as single and aggregate crystals ranging in size from 20-500 µm ( Figure 3). In contrast, rhombohedral iron oxides occurred typically as single crystals, about 80-600 µm long, and 40-300 µm wide ( Figure 4). Raman spectroscopic identification of these iron oxide inclusions is reported below. Three distinctive types were recognized by Raman spectra, including wüstite, hercynite, and ilmenite groups. In comparison with morphological features, wüstite and hercynite inclusions typically showed octahedral shape in forms of single and aggregate crystals. Moreover, wüstite-hercynite composite inclusions ( Figure 5) could be found. On the other hand, ilmenite inclusions were mostly characterized by rhombohedral single crystals.
At the low laser power (0.5 mW), all studied wüstite inclusions clearly showed only a sharp peak at 670 cm −1 , which closely resembled magnetite spectra ( Figure 6a). On the other hand, at high laser power (5 mW), some wüstite inclusions (e.g., DL50, DL56, PT17) showed characteristic patterns of magnetite (weaker peak in the range of 650-670 cm −1 ), hematite with higher intensity peaks at 218, 285, and 388, and wüstite assigned by the 595 cm −1 peak, as suggested by Hanesch [23] (Figure 6b). These spectra were matched well with wüstite spectra observed at low and high laser powers reported by Faria et al. [22]. In addition, some wüstite inclusions showed a broader band in the 650-670 cm −1 region (strongest peak at approximately 667 cm −1 ) of magnetite with higher intensity (Figure 6c). The Raman spectrum of hercynite is representatively displayed in Figure 7. The spectrum yielded a strong band at 753 cm −1 and a weak peak at 701 cm −1 which indicate the Three distinctive types were recognized by Raman spectra, including wüstite, hercynite, and ilmenite groups. In comparison with morphological features, wüstite and hercynite inclusions typically showed octahedral shape in forms of single and aggregate crystals. Moreover, wüstite-hercynite composite inclusions ( Figure 5) could be found. On the other hand, ilmenite inclusions were mostly characterized by rhombohedral single crystals. Three distinctive types were recognized by Raman spectra, including wüstite, hercynite, and ilmenite groups. In comparison with morphological features, wüstite and hercynite inclusions typically showed octahedral shape in forms of single and aggregate crystals. Moreover, wüstite-hercynite composite inclusions ( Figure 5) could be found. On the other hand, ilmenite inclusions were mostly characterized by rhombohedral single crystals.
At the low laser power (0.5 mW), all studied wüstite inclusions clearly showed only a sharp peak at 670 cm −1 , which closely resembled magnetite spectra (Figure 6a). On the other hand, at high laser power (5 mW), some wüstite inclusions (e.g., DL50, DL56, PT17) showed characteristic patterns of magnetite (weaker peak in the range of 650-670 cm −1 ), hematite with higher intensity peaks at 218, 285, and 388, and wüstite assigned by the 595 cm −1 peak, as suggested by Hanesch [23] (Figure 6b). These spectra were matched well with wüstite spectra observed at low and high laser powers reported by Faria et al. [22]. In addition, some wüstite inclusions showed a broader band in the 650-670 cm −1 region (strongest peak at approximately 667 cm −1 ) of magnetite with higher intensity (Figure 6c). The Raman spectrum of hercynite is representatively displayed in Figure 7. The spectrum yielded a strong band at 753 cm −1 and a weak peak at 701 cm −1 which indicate the At the low laser power (0.5 mW), all studied wüstite inclusions clearly showed only a sharp peak at 670 cm −1 , which closely resembled magnetite spectra (Figure 6a). On the other hand, at high laser power (5 mW), some wüstite inclusions (e.g., DL50, DL56, PT17) showed characteristic patterns of magnetite (weaker peak in the range of 650-670 cm −1 ), hematite with higher intensity peaks at 218, 285, and 388, and wüstite assigned by the 595 cm −1 peak, as suggested by Hanesch [23] (Figure 6b). These spectra were matched well with wüstite spectra observed at low and high laser powers reported by Faria et al. [22]. In addition, some wüstite inclusions showed a broader band in the 650-670 cm −1 region (strongest peak at approximately 667 cm −1 ) of magnetite with higher intensity (Figure 6c). peak, 683 cm −1 , was due to ilmenite, as suggested by Wang et al. [25], whereas 299 and 498 cm −1 peaks were caused by hematite [22]. Additionally, some ilmenite inclusions revealed broadened peaks at 220, 293, and 613 cm −1 which matched well with the hematite's pattern; moreover, an additional characteristic peak of ilmenite was present around 683 cm -1 as well as weak peaks at 399, 600, and 1298 cm −1 , likely indicating oxidized titanomagnetite (Figure 8b) [24,25]. These different Raman patterns indicated that ilmenite inclusions in this study may have a variety of chemical compositions. The Raman spectrum of hercynite is representatively displayed in Figure 7. The spectrum yielded a strong band at 753 cm −1 and a weak peak at 701 cm −1 which indicate the vibration of AlO 4 tetrahedra, characteristic of a spinel structure, as suggested by Cynn et al. [27]. This spectrum was similar to the hercynite spinel reported by Ospitali et al. [28].
vibration of AlO4 tetrahedra, characteristic of a spinel structure, as suggested by Cynn et al. [27]. This spectrum was similar to the hercynite spinel reported by Ospitali et al. [28].
An ilmenite inclusion showed Raman characteristics of mixture minerals of which Raman peaks were recognized at 299, 498, and 683 cm −1 (Figure 8a). The highest intensity peak, 683 cm −1 , was due to ilmenite, as suggested by Wang et al. [25], whereas 299 and 498 cm −1 peaks were caused by hematite [22]. Additionally, some ilmenite inclusions revealed broadened peaks at 220, 293, and 613 cm −1 which matched well with the hematite's pattern; moreover, an additional characteristic peak of ilmenite was present around 683 cm -1 as well as weak peaks at 399, 600, and 1298 cm −1 , likely indicating oxidized titanomagnetite (Figure 8b) [24,25]. These different Raman patterns indicated that ilmenite inclusions in this study may have a variety of chemical compositions. An ilmenite inclusion showed Raman characteristics of mixture minerals of which Raman peaks were recognized at 299, 498, and 683 cm −1 (Figure 8a). The highest intensity peak, 683 cm −1 , was due to ilmenite, as suggested by Wang et al. [25], whereas 299 and 498 cm −1 peaks were caused by hematite [22]. Additionally, some ilmenite inclusions revealed broadened peaks at 220, 293, and 613 cm −1 which matched well with the hematite's pattern; moreover, an additional characteristic peak of ilmenite was present around 683 cm −1 as well as weak peaks at 399, 600, and 1298 cm −1 , likely indicating oxidized titanomagnetite (Figure 8b) [24,25]. These different Raman patterns indicated that ilmenite inclusions in this study may have a variety of chemical compositions. Mineral chemistry of these iron oxide inclusions, based on the EPMA, clearly supported Raman spectroscopic identification. Representative EPMA analyses of each iron oxide type are shown in Tables 1-3. Mineral chemistry of these iron oxide inclusions, based on the EPMA, clearly supported Raman spectroscopic identification. Representative EPMA analyses of each iron oxide type are shown in Tables 1-3. (Figure 9). It should be noted again that some hercynite was found in close association with wüstites in sapphire from Binh Thuan (sample PT55, as shown in Figure 5, with EPMA analyses presented in Tables 1 and 2). Note: * previously reported by Vu et al. [9]. Fe 2+ and Fe 3+ were recalculated from total FeO after the method of Droop [26]. ΣR 2+ = Fe 2+ + Mn + Mg + Zn + Ca + Ni. ΣR 3+ = Si + Ti + Al + Cr + Fe 3+ .
Classification of Iron Oxide Inclusions
Based on morphological and Raman spectroscopic characteristics combined with EPMA analyses, iron oxide minerals in sapphire from Southern Vietnam can be grouped into wüstite, hercynite, and ilmenite. These EPMA analyses are newly reported for the mineral chemistry of these oxide inclusions in sapphires from Southern Vietnam. This is the first report of wüstite found as an inclusion in gem sapphire. Its composition, Fe 1−x O with 0.04 < x < 0.12, should be stable at < 570 • C and can be transformed between α-Fe and Fe 3 O 4 [31]. As reported above, low power laser (0.5 mW) excitation can decrease alterations on the wüstite surface and then generate a Raman spectrum very similar to the pattern of magnetite (Fe 3 O 4 ) (Figure 6a). This indicates that the structure of wüstite is made up of non-equivalent sites, called non-stoichiometric wüstite (FeO), because the structure of non-equivalent sites of non-stoichiometric wüstite is similar to that of magnetite (Fe 3 O 4 ) [32]. However, this spectrum changes to a strong decrease in half-intensity peak positions of the magnetite structure (Fe 3 O 4 ) and clearly displays new peaks of hematite structure (Fe 2 O 3 ) plus a peak at 595 cm −1 attributed to wüstite, as noticed by Hanesch [23] (Figure 6b) at high laser power (5 mW). This effect supports that wüstite's surface with a magnetite structure might be transformed into a hematite structure by the activation of high laser power. The irreversible transformation of magnetite to hematite caused by a lowering of the temperature in the natural process is commonly known as matitization [33]: Besides, a strong increase in the peak around 667 cm -1 (magnetite) at high laser power was visible in the other wüstite inclusions (Figure 6c), which may be attributed to the Raman active vibrations of spinel groups [27], indicating more spinel components in the structure. They are comparable to the EPMA analyses that yielded non-stoichiometric wüstite (FeO) and wüstite-hercynite ( Table 1). The hercynite component (e.g., 23 percent in sample PT49) obtained in non-stoichiometric wüstites (Table 1) should affect the intensity of such a magnetite peak (Figure 6c).
Raman spectroscopic features of hercynite spinel inclusions clearly belong to the spinel group. Additionally, EPMA analyses yielded the composition of hercynite (Table 2 and Figure 9). This is the first report of wüstite found as an inclusion in gem sapphire. Its composition, Fe1−xO with 0.04 < x < 0.12, should be stable at < 570 °C and can be transformed between α-Fe and Fe3O4 [31]. As reported above, low power laser (0.5 mW) excitation can decrease alterations on the wüstite surface and then generate a Raman spectrum very similar to the pattern of magnetite (Fe3O4) (Figure 6a). This indicates that the structure of wüstite is made up of non-equivalent sites, called non-stoichiometric wüstite (FeO), because the structure of non-equivalent sites of non-stoichiometric wüstite is similar to that of magnetite (Fe3O4) [32]. However, this spectrum changes to a strong decrease in halfintensity peak positions of the magnetite structure (Fe3O4) and clearly displays new peaks of hematite structure (Fe2O3) plus a peak at 595 cm −1 attributed to wüstite, as noticed by Hanesch [23] (Figure 6b) at high laser power (5 mW). This effect supports that wüstite's surface with a magnetite structure might be transformed into a hematite structure by the activation of high laser power. The irreversible transformation of magnetite to hematite caused by a lowering of the temperature in the natural process is commonly known as matitization [33]: Besides, a strong increase in the peak around 667 cm -1 (magnetite) at high laser power was visible in the other wüstite inclusions (Figure 6c), which may be attributed to the Raman active vibrations of spinel groups [27], indicating more spinel components in the structure. They are comparable to the EPMA analyses that yielded non-stoichiometric wüstite (FeO) and wüstite-hercynite ( Table 1). The hercynite component (e.g., 23 percent in sample PT49) obtained in non-stoichiometric wüstites (Table 1) should affect the intensity of such a magnetite peak (Figure 6c).
Raman spectroscopic features of hercynite spinel inclusions clearly belong to the spinel group. Additionally, EPMA analyses yielded the composition of hercynite (Table 2 and Figure 9). Raman spectroscopic features of ilmenite inclusions only exhibited the mixture patterns of ilmenite and hematite, called ilmenite solid solutions. In addition, the broadened peaks at 399, 600, and 1298 cm −1 of hematite ( Figure 8b) were caused by the oxidation of titanomagnetite. This oxidation seems to be influenced by ilmenite exsolution in the Ti-rich magnetite (titanomagnetite) which was reported previously in igneous intrusive from Bijigou, Pazhihua, and Xinjie in China [35]. Meanwhile, the EPMA analyses of ilmenite solid solutions clearly defined titanohematite and titanomagnetite (Table 3 and Figure 10). peaks at 399, 600, and 1298 cm −1 of hematite (Figure 8b) were caused by the oxidation of titanomagnetite. This oxidation seems to be influenced by ilmenite exsolution in the Tirich magnetite (titanomagnetite) which was reported previously in igneous intrusive from Bijigou, Pazhihua, and Xinjie in China [35]. Meanwhile, the EPMA analyses of ilmenite solid solutions clearly defined titanohematite and titanomagnetite (Table 3 and Figure 10). From the results, Raman spectroscopic features of ilmenite inclusions clearly exhibited ilmenite solid solutions (e.g., ilmenohematite and titanomagnetite), which are characteristic of rhombohedral solid solutions [25]; on the other hand, hercynite spinel and wüstite have cubic and octahedral shapes [30]. The EPMA shows attributes of wüstite and hercynite in octahedral iron oxide inclusions, whereas rhombohedral iron oxides favor ilmenite-hematite (titanohematite and titanomagnetite). The morphological characteristics of the iron oxide minerals in sapphire from Southern Vietnam match the results of Raman and EPMA analyses.
Crystallization Environment
Iron oxide inclusions in sapphire from Southern Vietnam are mostly characterized by euhedral crystals (rhombohedral and octahedral) which appear to be primary inclusions [36]. Therefore, their chemical and physical conditions can be used to reconstruct the crystallization environment of the host sapphire.
Non-stoichiometric wüstite appears to have formed under a strongly reducing condition [29,37]. Furthermore, the co-existing wüstite and hercynite inclusions in sapphire from Southern Vietnam ( Figure 5) may be involved by the hercynite's breakdown reaction process [38,39]: FeAl2O4 (hercynite) = Al2O3 (sapphire/ruby) + FeO (wüstite) Therefore, cogenetic wüstite and hercynite inclusions should be formed in a strongly reducing environment related to the hercynite breakdown reaction process. On the other hand, titanomagnetite and titanohematite inclusions should be generated by sub-solidus re-equilibration [1,40] which suggests an oxidizing environment. The compositions of titanomagnetite inclusions vary towards hematite components without ulvӧspinel components (Table 3) which should be re-equilibrated under the slow cooling oxidized sub-solidus environment [1,41]. This supports the complete oxidation reaction of spinelss by the From the results, Raman spectroscopic features of ilmenite inclusions clearly exhibited ilmenite solid solutions (e.g., ilmenohematite and titanomagnetite), which are characteristic of rhombohedral solid solutions [25]; on the other hand, hercynite spinel and wüstite have cubic and octahedral shapes [30]. The EPMA shows attributes of wüstite and hercynite in octahedral iron oxide inclusions, whereas rhombohedral iron oxides favor ilmenitehematite (titanohematite and titanomagnetite). The morphological characteristics of the iron oxide minerals in sapphire from Southern Vietnam match the results of Raman and EPMA analyses.
Crystallization Environment
Iron oxide inclusions in sapphire from Southern Vietnam are mostly characterized by euhedral crystals (rhombohedral and octahedral) which appear to be primary inclusions [36]. Therefore, their chemical and physical conditions can be used to reconstruct the crystallization environment of the host sapphire.
Non-stoichiometric wüstite appears to have formed under a strongly reducing condition [29,37]. Furthermore, the co-existing wüstite and hercynite inclusions in sapphire from Southern Vietnam ( Figure 5) may be involved by the hercynite's breakdown reaction process [38,39]: Therefore, cogenetic wüstite and hercynite inclusions should be formed in a strongly reducing environment related to the hercynite breakdown reaction process. On the other hand, titanomagnetite and titanohematite inclusions should be generated by sub-solidus re-equilibration [1,40] which suggests an oxidizing environment. The compositions of titanomagnetite inclusions vary towards hematite components without ulvöspinel components (Table 3) which should be re-equilibrated under the slow cooling oxidized subsolidus environment [1,41]. This supports the complete oxidation reaction of spinel ss by the re-equilibration: 4Fe 3 O 4 (in exsolution) + O 2 = 6Fe 2 O 3 (in ilmenite) to ilmenite exsolution in titanomagnetite.
This information indicates an environmental change during the formation of these oxide inclusions as well as their host sapphire. Initial crystallization should take place in a strongly reducing (indicated by wüstite occurrences) magma chamber prior to the slow cooling sub-solidus stage under a low oxidizing condition (based on phase transformation from titanomagnetite to titanohematite). This process may be related to oxygen fugacity and temperature changing towards the slow cooling process.
Inclusions and their host sapphire from Southern Vietnam were suggested to have formed in the lower crust [7,9] and in the lithospheric mantle [10]. In this study, wüstite with a cubic shape should be typified by crystallization at a temperature of about 570 • C [31]. This thermal process should be located in the continental crust (≤800 • C) [42]. Furthermore, wüstite appears to have occurred in the continental crust, as suggested by Seifert et al. [29], who reported wüstite in fluorapatite crystallized from S-type granite melts. Additionally, hercynite has been recognized in magmatic sapphires also formed in crust [43,44]. In addition, titanomagnetite inclusion with Il 24-30 components seems to be formed in plutonic rocks under a crustal environment, as suggested by Buddington et al. [45]. Therefore, coexisting wüstite, hercynite, titanohematite, and titanomagnetite clearly indicate that these oxide inclusions and their host sapphire should have crystallized directly from magma in crust instead of mantle.
Under the silica-saturated condition, iron oxide minerals (i.e., wüstite, hercynite, titanomagnetite, and titanohematite) are not stable; all Fe 2+ atoms preferentially enter silicate structures instead. Therefore, wüstite, hercynite, titanomagnetite, and titanohematite may only be formed in the silica-undersaturated environment. Titanomagnetite indicated a higher temperature (at about 600-700 • C) than the ilmenite-hematite miscibility gap [46]. Titanomagnetite should crystallize directly from magma prior to sub-solidus reequilibration of iron-titanium oxides during a slow cooling process within a low oxidizing environment. This late state re-equilibration led to a decrease in magnetite in spinel ss , with fO 2 increasing slightly.
Conclusions
Iron oxide mineral inclusions provide useful indicators of crystallization conditions of their host sapphire from Southern Vietnam. These iron oxide inclusions include wüstite, hercynite, titanomagnetite, and titanohematite series. Crystal morphology, Raman spectroscopy, and mineral chemical signatures of these iron oxides indicate silicaundersaturated magmatic origin in the lower crust. Wüstite might have crystallized from the hercynite's breakdown reaction (hercynite = sapphire + wüstite), whereas titanomagnetite and titanohematite series should have formed in sub-solidus re-equilibration within the slow cooling process. These results indicate an environmental change during the crystallization process of sapphire, wüstite, and hercynite in the reducing magma chamber prior to a slow cooling sub-solidus re-equilibrium of titanomagnetite to titanohematite with low oxidizing conditions. Supplementary Materials: The following are available online at https://www.mdpi.com/2075-163 X/11/3/241/s1, Table S1: Sapphire samples used in this study. Funding: The first author was supported for her PhD study by "the Scholarship Program for ASIAN Countries Chulalongkorn University". Institutional Review Board Statement: "Not applicable" for studies not involving humans or animals. | 7,131.6 | 2021-02-26T00:00:00.000 | [
"Geology"
] |
High-Performance Supercapacitors Using Compact Carbon Hydrogels Derived from Polybenzoxazine
Polybenzoxazine (PBz) aerogels hold immense potential, but their conventional production methods raise environmental and safety concerns. This research addresses this gap by proposing an eco-friendly approach for synthesizing high-performance carbon derived from polybenzoxazine. The key innovation lies in using eugenol, ethylene diamine, and formaldehyde to create a polybenzoxazine precursor. This eliminates hazardous solvents by employing the safer dimethyl sulfoxide. An acidic catalyst plays a crucial role, not only in influencing the microstructure but also in strengthening the material’s backbone by promoting inter-chain connections. Notably, this method allows for ambient pressure drying, further enhancing its sustainability. The polybenzoxazine acts as a precursor to produce two different carbon materials. The carbon material produced from the calcination of PBz is denoted as PBZC, and the carbon material produced from the gelation and calcination of PBz is denoted as PBZGC. The structural characterization of these carbon materials was analyzed through different techniques, such as XRD, Raman, XPS, and BET analyses. BET analysis showed increased surface of 843 m2 g−1 for the carbon derived from the gelation method (PBZGC). The electrochemical studies of PBZC and PBZGC imply that a well-defined morphology, along with suitable porosity, paves the way for increased conductivity of the materials when used as electrodes for supercapacitors. This research paves the way for utilizing heteroatom-doped, polybenzoxazine aerogel-derived carbon as a sustainable and high-performing alternative to traditional carbon materials in energy storage devices.
Introduction
The ongoing energy crisis is driving the urgent need for clean and sustainable energy sources like solar, wind, and hydrogen power.However, integrating these renewables into the existing power grid requires robust energy storage solutions.Among the various contenders-lithium-ion batteries, zinc-ion batteries, and supercapacitors-supercapacitors hold immense promise due to their unique properties.Supercapacitors boast exceptional power density, allowing for rapid charging and discharging [1,2].They also exhibit remarkable cycling stability, lasting through numerous charge/discharge cycles without significant degradation.Additionally, supercapacitors prioritize safety, making them ideal for applications where battery flammability poses a risk.These advantages have propelled supercapacitors into widespread use, powering everything from portable electronics to automotive systems and even new energy plants.The heart of a supercapacitor's performance lies in its electrode materials [3][4][5].
Traditionally, carbon-based materials have dominated this space, with graphene emerging as a frontrunner due to its exceptional physical and chemical properties.However, graphene's path to supercapacitor supremacy has not been without hurdles.A significant challenge lies in the strong attraction between graphene sheets, causing them to Gels 2024, 10, 509 2 of 16 clump together.This aggregation hinders the flow of ions, essential for energy storage, and reduces the surface area available for interaction.In turn, this translates into a lower electric double-layer capacitance (EDLC), the mechanism by which supercapacitors store energy [6].Additionally, graphene electrodes tend to have low packing density, hindering their ability to store energy efficiently within a limited volume.Finally, pure graphene lacks the necessary sites for pseudocapacitive reactions, which contribute significantly to energy density.Researchers are actively tackling these limitations to unlock the full potential of graphene-based supercapacitors.Strategies to prevent sheet aggregation, increase packing density, and introduce pseudocapacitive sites are at the forefront of this endeavor.By overcoming these challenges, graphene-based supercapacitors can revolutionize energy storage, paving the way for a future powered by efficient, reliable, and clean energy solutions [7][8][9].
Scientists are constantly innovating to improve the performance of graphene electrodes in supercapacitors.One key area of focus is packing density.Techniques like vacuum filtration and mechanical compression have shown promise in this regard, leading to more densely packed graphene and potentially improved volumetric performance.However, there is a catch: these methods can cause significant clumping of the graphene sheets.This clumping acts as a barrier to ions, the charged particles crucial for fast energy storage and release.This ultimately hinders the rate capacity, i.e., the device's ability to deliver power quickly.To address the aggregation issue, various studies have explored the use of physical spacers between graphene layers.These spacers are intended to prevent the sheets from clumping together too tightly, allowing for better ion movement.Despite this, when aggregation becomes too pronounced, these spacers may fail to ensure adequate electrolyte penetration into the material.This results in a reduced effective specific surface area, limiting the energy storage capacity of the sample [10][11][12][13].
An alternative approach involves incorporating pseudocapacitive materials into the graphene structure.This strategy not only increases the packing density but also enhances the pseudocapacitive properties of graphene.Pseudocapacitive materials can significantly boost the energy density of graphene-based electrodes.However, this comes with tradeoffs; the introduction of these materials can adversely affect the high-current charging and discharging performance of the graphene, ultimately shortening its service life.While these materials offer the potential for achieving ultra-high energy density, careful consideration of their impact on the overall performance and longevity of the electrode is crucial [14][15][16][17][18].
The pseudocapacitance is brought about by doping graphene with specific heteroatoms like boron, nitrogen, sulfur, and phosphorus.These elements significantly improve the material's pseudocapacitive performance.Unlike battery-type pseudocapacitive materials, the Faradic reactions occurring between the doped functional groups and the electrolyte exhibit superior reversibility.This translates into a dramatic improvement in the cycle life of the graphene supercapacitors.Additionally, heteroatom doping can fine-tune the electronic transport properties and hydrophilicity of graphene.These modifications optimize ion transport and enhance the rate capability, leading to supercapacitors with exceptional performance across various metrics.
In essence, this approach leverages the unique properties of GO and strategic manipulation of the final structure to create a new generation of supercapacitor electrodes with superior performance, cyclability, and overall functionality [19][20][21][22].
This research builds upon previous advancements [23] by introducing a polymer gel-based carbon electrode with exceptional potential for energy storage applications.The key component of this electrode is a custom-designed benzoxazine monomer (Bzo) comprised of eugenol and ethylene diamine.This Bzo monomer forms the foundation for polybenzoxazine (PBz), a class of advanced resins boasting several advantages: Superior Stability-PBzs exhibit minimal water absorption and shrinkage during formation.Additionally, they undergo catalyst-free polymerization and retain their shape effectively.Tailored Functionality-the molecular structure of PBz can be strategically modified to suit specific applications.The Bzo monomer synthesis utilizes a Mannich condensation reaction, yielding a self-polymerizable structure upon heating.This process results in a PBz rich in hydroxyl (-OH) and amine (-NH 2 ) groups.Two different polybenzoxazine-based carbon materials were produced.One through carbonization of PBz, namely PBZC, and the other through the gelation and carbonization of Pbz, namely PBZGC.To assess the performance of these polybenzoxazine-based carbon materials, a thorough analysis was conducted within a three-electrode system.Rigorous electrochemical evaluations were employed to determine their energy storage capacity and stability under various conditions.The results convincingly demonstrated the remarkable durability and efficiency of PBZC and PBZGC, positioning them as highly promising materials for energy-storage applications.The innovative composition and robust structural properties of these PBz-based carbon materials signify their immense potential for revolutionizing energy storage technology.
Synthesis of EEd-Bzo Monomer
The synthesis of the EEd-Bzo monomer proceeded through Mannich condensation.The detailed procedure was as follows: a three-necked, round-bottom flask equipped with a magnetic stirrer and a reflux condenser served as the reaction vessel.The process began by dissolving paraformaldehyde (1.8 g, 0.06 m) in 20 mL of dimethyl sulfoxide (DMSO) at 50 • C with continuous stirring.Ethylene diamine (1.34 mL, 0.02 m) was then added dropwise into the reaction mixture.Simultaneously, a separate solution was prepared by dissolving eugenol (3.28 g, 0.02 m) in 10 mL of DMSO.Once the addition of ethylene diamine was complete, the eugenol solution was gradually added dropwise to the reaction mixture, while increasing the temperature up to 120 • C, and the reaction proceeded for 3 h at this temperature.The formation of pale-yellow solution indicated the completion of the reaction.After cooling to room temperature, the reaction solution was precipitated in 1 N NaOH and meticulously washed with distilled water to eliminate impurities, yielding the EEd-Bzo monomer.The final product was separated via filtration and dried under vacuum at 50 • C for 12 h to ensure the complete removal of moisture.
Conversion of EEd-Bzo Monomer to PBZC
The synthesized EEd-Bzo monomer underwent a stepwise thermal curing process.The benzoxazine monomer, EEd-Bzo, was subjected to progressively increasing temperatures of 100, 150, 200, and finally 250 • C, with each stage lasting for 2 h.This curing process transformed the EEd-Bzo monomer into a self-cured polybenzoxazine (PBz) network.Following the curing step, the poly (EEd-Bzo) was further processed by carbonization.This involved heating the material under a nitrogen atmosphere to 600 • C for 5 h, with a controlled heating rate of 1 • C/min.The resulting carbonized material was then thoroughly mixed with an aqueous KOH solution, with subsequent removal of water from the mixture through evaporation at 120 • C. The final stage involved the activation of the carbonized material, where the mixture was heated up to 800 • C (with a heating rate of 5 • C/min and holding at this temperature for 3 h) in a tubular furnace under continuous nitrogen flow.The final product obtained from this step was the desired PBZC material.Scheme 1 provides a visual illustration of the entire synthesis and activation process for PBZC.This method offers a reliable approach for synthesizing a high-quality benzoxazine monomer and its subsequent transformation into a well-defined carbonized and activated material, making it suitable for diverse applications.The meticulous control of temperature and reaction times at each stage is critical for achieving the desired chemical transformations and obtaining PBZC with the targeted properties.
Tailoring Porous Carbon: The Intricate Synthesis of Polybenzoxazine Aerogel Carbon (PBZGC)
The synthesis of PBZGC is a fascinating journey, meticulously crafting a porous carbon material with exceptional properties.This process, outlined in Scheme 1, can be broken down into several key stages: The first step involves crafting a critical starting material-the precursor sol.Here, a precise amount of hydrochloric acid (HCl) was dissolved in N, N-dimethylformamide (DMF), to create a controlled acidic environment.This solution was then meticulously combined with EEd-Bzo, a key building block for the desired polymer network.Additional DMF ensures thorough mixing, resulting in a homogenous precursor sol after a short stirring period at room temperature.The precursor sol was then carefully poured into molds, where a two-step thermal treatment transformed it into a gel-like state known as an alcogel.This transformation involves a meticulously controlled heating program.First, the molds are heated at a moderate temperature of 120 °C for a specific duration of 2 h.Subsequently, the temperature is elevated further to 140 °C and maintained for an extended period of 48 h.This staged heating allows for controlled polymerization and network formation within the sol.On completion of the heating process, the alcogel was meticulously cooled to room temperature.To remove the solvent trapped within the gel network, a solvent exchange-procedure was implemented.The alcogel was sequentially submerged in ethanol and water at room temperature.This process was repeated three times for each solvent, ensuring complete removal of the original DMF.Additionally, the solvent was changed every 12 h, maximizing the efficiency of the exchange process.Following solvent exchange, the delicate task of drying the alcogel commenced.Here, a supercritical CO2 drying technique was employed, where the alcogel was frozen in liq.N2 for 10 min and then dried in a freeze dryer, FDA5518 model, manufactured by Ilshin Biomass, for a period of 72 h.This method allows for the gentle removal of the remaining solvent without compromising the intricate pore structure of the gel.Next, the dried gels undergo a crucial transformation-carbonization.This process involves heating the gel to a high temperature (600 °C) under a controlled nitrogen atmosphere, following a ramp rate of 1 °C/min to ensure uniform conversion.The high temperature treatment transforms the organic polymer network into a carbon framework, retaining the desired porous structure.Further, the carbonized gel undergoes activation by intimately mixing it with potassium hydroxide (KOH) in a specific ratio (1:2), and then
Tailoring Porous Carbon: The Intricate Synthesis of Polybenzoxazine Aerogel Carbon (PBZGC)
The synthesis of PBZGC is a fascinating journey, meticulously crafting a porous carbon material with exceptional properties.This process, outlined in Scheme 1, can be broken down into several key stages: The first step involves crafting a critical starting material-the precursor sol.Here, a precise amount of hydrochloric acid (HCl) was dissolved in N, Ndimethylformamide (DMF), to create a controlled acidic environment.This solution was then meticulously combined with EEd-Bzo, a key building block for the desired polymer network.Additional DMF ensures thorough mixing, resulting in a homogenous precursor sol after a short stirring period at room temperature.The precursor sol was then carefully poured into molds, where a two-step thermal treatment transformed it into a gel-like state known as an alcogel.This transformation involves a meticulously controlled heating program.First, the molds are heated at a moderate temperature of 120 • C for a specific duration of 2 h.Subsequently, the temperature is elevated further to 140 • C and maintained for an extended period of 48 h.This staged heating allows for controlled polymerization and network formation within the sol.On completion of the heating process, the alcogel was meticulously cooled to room temperature.To remove the solvent trapped within the gel network, a solvent exchange-procedure was implemented.The alcogel was sequentially submerged in ethanol and water at room temperature.This process was repeated three times for each solvent, ensuring complete removal of the original DMF.Additionally, the solvent was changed every 12 h, maximizing the efficiency of the exchange process.Following solvent exchange, the delicate task of drying the alcogel commenced.Here, a supercritical CO 2 drying technique was employed, where the alcogel was frozen in liq.N 2 for 10 min and then dried in a freeze dryer, FDA5518 model, manufactured by Ilshin Biomass, for a period of 72 h.This method allows for the gentle removal of the remaining solvent without compromising the intricate pore structure of the gel.Next, the dried gels undergo a crucial transformation-carbonization.This process involves heating the gel to a high temperature (600 • C) under a controlled nitrogen atmosphere, following a ramp rate of 1 • C/min to ensure uniform conversion.The high temperature treatment transforms the organic polymer network into a carbon framework, retaining the desired porous structure.Further, the carbonized gel undergoes activation by intimately mixing it with potassium hydroxide (KOH) in a specific ratio (1:2), and then subjecting it to another heating program (with a ramp rate of 3 • C/min up to 800 • C and holding it at this temperature for 1 h) in a tube furnace under a nitrogen flow.The sample obtained after activation was repeatedly washed with a diluted hydrochloric acid solution (1 M HCl) followed by deionized water.This washing step removes any residual KOH and ensures the final PBZGC has a neutral pH, indicating the absence of any acidic or basic impurities.The PBZGC sample was finally dried at 110 • C for 12 h for the removal of any residual moisture.This intricate multi-step process, with its precisely controlled parameters, allows for the creation of PBZGC with specific properties tailored to various advanced material applications.
FT-IR Spectroscopy
The FT-IR spectrum of the EEd-Bzo benzoxazine monomer, depicted in Figure 1, reveals several characteristic bands indicative of its molecular structure.A notable band at 939 cm −1 corresponds to the -CH 2 stretching vibrations within the oxazine ring.The C-O-C asymmetric and symmetric stretching vibrations appear as distinct peaks at 1232 and 1224 cm −1 , respectively.Additionally, the C-N-C stretching vibrations manifest as peaks at 1122 and 1093 cm −1 .A band at 1268 cm −1 is attributed to the methoxy group stretching vibrations, while the tetra-substituted benzene ring gives rise to a band at 1365 cm −1 .Aliphatic C-H stretching vibrations are identified at 2953 and 2854 cm −1 , whereas the amine group's -NH stretching vibrations are observed at 3004 cm −1 .These spectral features collectively confirm the successful formation of the benzoxazine monomer [24,25].
subjecting it to another heating program (with a ramp rate of 3 °C/min up to 800 °C holding it at this temperature for 1 h) in a tube furnace under a nitrogen flow.The sam obtained after activation was repeatedly washed with a diluted hydrochloric acid solut (1 M HCl) followed by deionized water.This washing step removes any residual K and ensures the final PBZGC has a neutral pH, indicating the absence of any acidi basic impurities.The PBZGC sample was finally dried at 110 °C for 12 h for the remo of any residual moisture.This intricate multi-step process, with its precisely contro parameters, allows for the creation of PBZGC with specific properties tailored to vari advanced material applications.
FT-IR Spectroscopy
The FT-IR spectrum of the EEd-Bzo benzoxazine monomer, depicted in Figure 1 veals several characteristic bands indicative of its molecular structure.A notable band 939 cm −1 corresponds to the -CH2 stretching vibrations within the oxazine ring.The C-C asymmetric and symmetric stretching vibrations appear as distinct peaks at 1232 1224 cm −1 , respectively.Additionally, the C-N-C stretching vibrations manifest as pe at 1122 and 1093 cm −1 .A band at 1268 cm −1 is a ributed to the methoxy group stretch vibrations, while the tetra-substituted benzene ring gives rise to a band at 1365 cm −1 .phatic C-H stretching vibrations are identified at 2953 and 2854 cm −1 , whereas the am group's -NH stretching vibrations are observed at 3004 cm −1 .These spectral features lectively confirm the successful formation of the benzoxazine monomer [24,25].
NMR Spectroscopy
The structure of the benzoxazine monomer is further validated using NMR spect copy.Figure 1b,c illustrate the 1 H-NMR and 13 C-NMR spectra, respectively.The 1 H-N
NMR Spectroscopy
The structure of the benzoxazine monomer is further validated using NMR spectroscopy.Figure 1b,c illustrate the 1 H-NMR and 13 C-NMR spectra, respectively.The 1 H-NMR spectrum features two singlets at 4.8 and 3.9 ppm, corresponding to the oxazine ring protons O-CH 2 -N and N-CH 2 -Ph.The amine group's protons produce a peak at 2.8 ppm, while the methoxy group's protons resonate at 3.7 ppm.Additionally, the methyl protons of the ethylamine group appear at 3.2 ppm, and the allyl protons are observed at 5.0 and 5.9 ppm.The aromatic protons are detected at 6.3 and 6.6 ppm.In the 13 Overall, the detailed spectroscopic analysis via FT-IR and NMR provides robust evidence for the structural integrity and successful synthesis of the EEd-Bzo monomer.The presence of characteristic vibrational bands and chemical shifts specific to the oxazine ring and other functional groups confirm the anticipated molecular framework.This comprehensive spectroscopic validation underscores the reliability of the synthetic process and the structural formation of the benzoxazine monomer.
Unveiling a Lower-Temperature Polymerization Pathway for EEd-Bzo
Differential Scanning Calorimetric (DSC) analysis sheds light on the unique polymerization process of EEd-Bzo (Figure 1d).The presence of a distinct exothermic peak at 148 • C signifies the ring-opening polymerization of the benzoxazine unit within EEd-Bzo.This curing temperature stands out compared to traditional mono-benzoxazines like P-A (phenol-aniline), which typically require significantly higher temperatures (around 255 • C) to initiate polymerization.Therefore, the DSC analysis reveals a fascinating aspect of EEd-Bzo: its ability to polymerize at a considerably lower temperature compared to conventional mono-benzoxazines.This behavior can be primarily attributed to the -NH 2 group, which acts as a bridge and facilitates robust crosslinking interactions with the methoxy and amine groups, thereby accelerating the polymerization process.These interactions significantly enhance the efficiency of benzoxazine ring-opening polymerization.A deeper understanding of these interactions and their impact on the polymerization behavior paves the way for the development of novel materials with precisely controlled properties [24,26].
Characterization of Carbon Structure and Graphitization
Raman spectroscopy and X-ray diffraction (XRD) were used to analyze the structure and graphitic properties of carbon samples.The Raman spectra showed two main peaks corresponding to disordered (D band) and graphitic (G band) carbon.The PBZC sample had a stronger D band, indicating more defects compared to PBZGC (Figure 2a).The intensity ratio of the D and G bands (I D /I G ) is a measure of the graphitic quality of the carbon.The I D /I G ratios were found to be 0.97 for PBZC and 0.84 for PBZGC.This suggests that the aerogel method and chemical activation had a minimal impact on the overall graphitic character, despite their similar chemical compositions.The lower I D /I G ratio for PBZGC implies a higher degree of graphitization and reduced disorder, compared to PBZC.XRD analysis confirmed the presence of graphitic domains in both samples (Figure 2b).The calculated d-spacing for the (002) plane in the carbon aerogel was 0.40 nm, larger than that of traditional graphite.This increased d-spacing could potentially enhance supercapacitor performance.Raman spectroscopy and XRD provided valuable insights into the structural and graphitic properties of the carbon samples.PBZGC exhibited a higher degree of graphitization compared to PBZC, while both materials possessed graphitic domains with an expanded interlayer spacing, potentially beneficial for supercapacitors [12,16,19].
Porosity Characterization of Synthesized Carbon Materials
Nitrogen sorption analysis probed the porosity and texture of the synthesized samples.Brunauer-Emme -Teller (BET) theory and non-localized density functional theory (NLDFT) revealed specific surface area and pore-size distribution (PSD).Figure 2c showcases the nitrogen adsorption behavior.Interestingly, the isotherms exhibited a combination of type I and IV characteristics.The sharp rise at low relative pressures (P/P0) suggests the presence of micropores in PBZGC, particularly evident compared to PBZC.Furthermore, both samples displayed hysteresis loops throughout the P/P0 range, indicative of mesopores with consistent size.Figure 2d confirms the presence of mesopores (20-50 Å) in both samples.Notably, PBZC (calcination derived) possessed a broader mesopore distribution compared to the narrower and higher volume distribution observed in PBZGC (aerogel derived).BET analysis yielded specific surface areas of 576 m 2 g −1 for PBZC and 843 m 2 g −1 for PBZGC.The combination of micropores, high surface area, and well-defined mesopores makes PBZC and PBZGC most suitable to be used as electrode materials for supercapacitor applications.This optimized microstructure facilitates efficient electrochemical processes-a critical aspect for next-generation energy storage devices [28,29].
6. Unveiling the Chemical Landscape: XPS Analysis of Nitrogen-Doped Carbon Aerogels X-ray Photoelectron Spectroscopy (XPS) served as a powerful tool to decipher the interplay between nitrogen and oxygen species on the surfaces of the fabricated carbon aerogels.Figures 3 and 4 showcase the XPS spectra of the synthesized carbon samples-PBZC and PBZGC-revealing distinct peaks for carbon (C 1s), nitrogen (N 1s), and oxygen (O 1s).These signatures strongly confirm the successful incorporation of nitrogen and oxygen into the carbon frameworks, as expected when using benzoxazine monomers as the starting material.Notably, the XPS analysis also verifies the absence of any contaminants in the self-doped carbon materials.Figure 3 depicts a broad XPS survey for both samples, highlighting the presence of carbon, nitrogen, and oxygen at characteristic binding
Porosity Characterization of Synthesized Carbon Materials
Nitrogen sorption analysis probed the porosity and texture of the synthesized samples.Brunauer-Emmett-Teller (BET) theory and non-localized density functional theory (NLDFT) revealed specific surface area and pore-size distribution (PSD).Figure 2c showcases the nitrogen adsorption behavior.Interestingly, the isotherms exhibited a combination of type I and IV characteristics.The sharp rise at low relative pressures (P/P 0 ) suggests the presence of micropores in PBZGC, particularly evident compared to PBZC.Furthermore, both samples displayed hysteresis loops throughout the P/P 0 range, indicative of mesopores with consistent size.Figure 2d confirms the presence of mesopores (20-50 Å) in both samples.Notably, PBZC (calcination derived) possessed a broader mesopore distribution compared to the narrower and higher volume distribution observed in PBZGC (aerogel derived).BET analysis yielded specific surface areas of 576 m 2 g −1 for PBZC and 843 m 2 g −1 for PBZGC.The combination of micropores, high surface area, and well-defined mesopores makes PBZC and PBZGC most suitable to be used as electrode materials for supercapacitor applications.This optimized microstructure facilitates efficient electrochemical processes-a critical aspect for next-generation energy storage devices [28,29].
Unveiling the Chemical Landscape: XPS Analysis of Nitrogen-Doped Carbon Aerogels
X-ray Photoelectron Spectroscopy (XPS) served as a powerful tool to decipher the interplay between nitrogen and oxygen species on the surfaces of the fabricated carbon aerogels.Figures 3 and 4 showcase the XPS spectra of the synthesized carbon samples-PBZC and PBZGC-revealing distinct peaks for carbon (C 1s), nitrogen (N 1s), and oxygen (O 1s).These signatures strongly confirm the successful incorporation of nitrogen and oxygen into the carbon frameworks, as expected when using benzoxazine monomers as the starting material.Notably, the XPS analysis also verifies the absence of any contaminants in the self-doped carbon materials.Figure 3 depicts a broad XPS survey for both samples, highlighting the presence of carbon, nitrogen, and oxygen at characteristic binding energies of approximately 285.2, 400.1, and 532.3 eV, respectively.A deeper dive into the specific chemical environments is presented in Figure 4 Deconvolution of the C 1s peak (Figure 4a) unveiled four distinct contributions: C=C/C-C hydrocarbon chains (284.8 eV), C-N bonds (285.6 eV), O-C=O/C=N/C-OH groups (286.5 eV), and carbon atoms bonded with oxygen and nitrogen in HN-C=O groups (288.8 eV).This detailed breakdown provides valuable insights into the various carbon bonding configurations within the aerogel structure.The N 1s region (Figure 4b) revealed three distinct peaks to different types of nitrogen functionalities: pyrrolic N (398.4eV), graphitic N (400.6 eV), and pyridine N-oxide (406.3 eV).This variety of nitrogen incorporation suggests a complex yet potentially beneficial chemical environment for specific applications.The O 1s spectrum (Figure 4c) further enriches the picture by displaying four peaks at binding energies of 531.5, 533.1, 533.6, and 537.2 eV, respectively, a ributed to hydroxyl (C-OH), epoxy (C-O-C), carbonyl (C=O)/carboxyl (COO), and chemisorbed oxygen or water groups [33][34][35].This confirms the presence of oxygencontaining functional groups and potentially entrapped water molecules within the carbon matrix.Similar observations could be observed for PBZGC with a slight shift in the deconvoluted peaks of C 1s, N 1s, and O 1s.This further underscores the presence of Deconvolution of the C 1s peak (Figure 4a) unveiled four distinct contributions: C=C/C-C hydrocarbon chains (284.8 eV), C-N bonds (285.6 eV), O-C=O/C=N/C-OH groups (286.5 eV), and carbon atoms bonded with oxygen and nitrogen in HN-C=O groups (288.8 eV).This detailed breakdown provides valuable insights into the various carbon bonding configurations within the aerogel structure.The N 1s region (Figure 4b) revealed three distinct peaks corresponding to different types of nitrogen functionalities: pyrrolic N (398.4eV), graphitic N (400.6 eV), and pyridine N-oxide (406.3 eV).This variety of nitrogen incorporation suggests a complex yet potentially beneficial chemical environment for specific applications.The O 1s spectrum (Figure 4c) further enriches the picture by displaying four peaks at binding energies of 531.5, 533.1, 533.6, and 537.2 eV, respectively, a ributed to hydroxyl (C-OH), epoxy (C-O-C), carbonyl (C=O)/carboxyl (COO), and chemisorbed oxygen or water groups [33][34][35].This confirms the presence of oxygencontaining functional groups and potentially entrapped water molecules within the carbon matrix.Similar observations could be observed for PBZGC with a slight shift in the deconvoluted peaks of C 1s, N 1s, and O 1s.This further underscores the presence of This detailed breakdown provides valuable insights into the various carbon bonding configurations within the aerogel structure.The N 1s region (Figure 4b) revealed three distinct peaks corresponding to different types of nitrogen functionalities: pyrrolic N (398.4eV), graphitic N (400.6 eV), and pyridine N-oxide (406.3 eV).This variety of nitrogen incorporation suggests a complex yet potentially beneficial chemical environment for specific applications.The O 1s spectrum (Figure 4c) further enriches the picture by displaying four peaks at binding energies of 531.5, 533.1, 533.6, and 537.2 eV, respectively, attributed to hydroxyl (C-OH), epoxy (C-O-C), carbonyl (C=O)/carboxyl (COO), and chemisorbed oxygen or water groups [33][34][35].This confirms the presence of oxygen-containing functional groups and potentially entrapped water molecules within the carbon matrix.Similar observations could be observed for PBZGC with a slight shift in the deconvoluted peaks of C 1s, N 1s, and O 1s.This further underscores the presence of diverse nitrogen bondingconfigurations within the PBZGC.In addition, the XPS analysis serves as a compelling validation of the successful nitrogen doping within the synthesized carbon materials.The observed higher nitrogen content compared to reference materials like APFC-N suggests a promising avenue for enhanced performance in applications like supercapacitors.This comprehensive XPS characterization offers valuable insights into the chemical states and interactions within the activated and aerogel-based carbons, paving the way for their exploration in energy storage technologies.
Contrasting Morphologies of the Carbon Materials
Scanning electron microscopy (SEM) revealed distinct textural differences between PBZC and PBZGC, impacting their potential for supercapacitor applications.PBZC exhibited an irregular microparticle structure with particle sizes ranging from hundreds of nanometers to a few micrometers.These particles clumped together to form large, porous clusters with a spreading of flaky particles in some regions, and numerous balls in another region (Figure 5a-c).Notably, PBZC lacks visible pores, displaying a rough and continuous surface.This absence of porosity translated into a limited surface area, potentially hindering its supercapacitor performance.In contrast, PBZGC, derived from activated carbon aerogel, boasted a dramatically transformed surface morphology.SEM images unveiled a network of micropores and mesopores scattered across the PBZGC surface, with minimal presence of larger macropores.These macropores, roughly around 0.5 mm in diameter (Figure 5d-f), likely arose from the expulsion of residual solvent during gelation.This extensive network of pores is highly beneficial for supercapacitors as it significantly expands the surface area, providing more sites for interaction with electrolyte ions and, consequently, enhancing capacitance.
Gels 2024, 10, x FOR PEER REVIEW 9 of 16 diverse nitrogen bonding-configurations within the PBZGC.In addition, the XPS analysis serves as a compelling validation of the successful nitrogen doping within the synthesized carbon materials.The observed higher nitrogen content compared to reference materials like APFC-N suggests a promising avenue for enhanced performance in applications like supercapacitors.This comprehensive XPS characterization offers valuable insights into the chemical states and interactions within the activated and aerogel-based carbons, paving the way for their exploration in energy storage technologies.
Contrasting Morphologies of the Carbon Materials
Scanning electron microscopy (SEM) revealed distinct textural differences between PBZC and PBZGC, impacting their potential for supercapacitor applications.PBZC exhibited an irregular microparticle structure with particle sizes ranging from hundreds of nanometers to a few micrometers.These particles clumped together to form large, porous clusters with a spreading of flaky particles in some regions, and numerous balls in another region (Figure 5a-c).Notably, PBZC lacks visible pores, displaying a rough and continuous surface.This absence of porosity translated into a limited surface area, potentially hindering its supercapacitor performance.In contrast, PBZGC, derived from activated carbon aerogel, boasted a dramatically transformed surface morphology.SEM images unveiled a network of micropores and mesopores sca ered across the PBZGC surface, with minimal presence of larger macropores.These macropores, roughly around 0.5 mm in diameter (Figure 5d-f), likely arose from the expulsion of residual solvent during gelation.This extensive network of pores is highly beneficial for supercapacitors as it significantly expands the surface area, providing more sites for interaction with electrolyte ions and, consequently, enhancing capacitance.Transmission electron microscopy (TEM) offered a closer look at PBZGC's internal structure.The high-resolution images (Figure 6a-d) reveal an open-pore network, a crucial characteristic for optimal supercapacitor performance.This nano-architecture offers several advantages: it shortens the diffusion pathways for ions, enabling their rapid movement within the electrode, and it guarantees a continuous path for electrons, thereby improving electrical conductivity.Additionally, PBZGC exhibited a denser network of pores with a more ordered arrangement, suggesting the presence of sp 2 -bonded carbon.This is particularly advantageous because sp 2 -bonded carbon is renowned for its exceptional electrical conductivity.Transmission electron microscopy (TEM) offered a closer look at PBZGC's internal structure.The high-resolution images (Figure 6a-d) reveal an open-pore network, a crucial characteristic for optimal supercapacitor performance.This nano-architecture offers several advantages: it shortens the diffusion pathways for ions, enabling their rapid movement within the electrode, and it guarantees a continuous path for electrons, thereby improving electrical conductivity.Additionally, PBZGC exhibited a denser network of pores with a more ordered arrangement, suggesting the presence of sp 2 -bonded carbon.This is particularly advantageous because sp 2 -bonded carbon is renowned for its exceptional electrical conductivity.
In essence, SEM and TEM analyses unveiled significant discrepancies between PBZC and PBZGC.PBZC's limited porosity and irregular structure suggest it may be less suitable for supercapacitor applications.Conversely, PBZGC's highly porous and well-ordered structure, coupled with the presence of sp 2 -bonded carbon, implies superior performance due to its increased surface area and enhanced electrical conductivity.These morpho-logical characteristics are paramount for optimizing the efficiency and performance of supercapacitor electrodes.In essence, SEM and TEM analyses unveiled significant discrepancies between PBZC and PBZGC.PBZC's limited porosity and irregular structure suggest it may be less suitable for supercapacitor applications.Conversely, PBZGC's highly porous and well-ordered structure, coupled with the presence of sp 2 -bonded carbon, implies superior performance due to its increased surface area and enhanced electrical conductivity.These morphological characteristics are paramount for optimizing the efficiency and performance of supercapacitor electrodes.
Nitrogen-Rich Porous Carbons for Supercapacitors: PBZC and PBZGC
The prepared materials-PBZC and PBZGC-are nitrogen-rich porous carbons with unique microstructures, which makes them ideal to be used as supercapacitor electrodes.A comprehensive electrochemical analysis using a three-electrode system with sulfuric acid as an electrolyte was conducted to assess their potential.Cyclic voltammetry (CV) is a key technique used to examine the capacitive properties of these electrodes.As shown in Figures 7a and 8a, the CV curves for both PBZC and PBZGC exhibit a near-rectangular shape, even at high scan rates, indicating excellent capacitive behavior.Notably, the larger CV curve area for PBZGC suggests superior charge storage and release capabilities, compared to PBZC.Further insights into charge storage and delivery were obtained through galvanostatic charge-discharge (GCD) measurements.The GCD curves for PBZC and PBZGC at different current densities are displayed in Figures 7b and 8b, respectively.They show near-triangular shaped GCD curves, with minimal voltage drops during charge and discharge, indicating efficient charge storage.Slight deviations from ideal triangular shapes suggest some pseudocapacitive behavior, likely due to the presence of nitrogen atoms confirmed by X-ray photoelectron spectroscopy (XPS).These nitrogen functionalities enhance pseudocapacitance by creating an electrochemically active interface with the electrolyte ions [36][37][38].
Electrochemical Studies
Nitrogen-Rich Porous Carbons for Supercapacitors: PBZC and PBZGC The prepared materials-PBZC and PBZGC-are nitrogen-rich porous carbons with unique microstructures, which makes them ideal to be used as supercapacitor electrodes.A comprehensive electrochemical analysis using a three-electrode system with sulfuric acid as an electrolyte was conducted to assess their potential.Cyclic voltammetry (CV) is a key technique used to examine the capacitive properties of these electrodes.As shown in Figures 7a and 8a, the CV curves for both PBZC and PBZGC exhibit a near-rectangular shape, even at high scan rates, indicating excellent capacitive behavior.Notably, the larger CV curve area for PBZGC suggests superior charge storage and release capabilities, compared to PBZC.Further insights into charge storage and delivery were obtained through galvanostatic charge-discharge (GCD) measurements.The GCD curves for PBZC and PBZGC at different current densities are displayed in Figures 7b and 8b, respectively.They show near-triangular shaped GCD curves, with minimal voltage drops during charge and discharge, indicating efficient charge storage.Slight deviations from ideal triangular shapes suggest some pseudocapacitive behavior, likely due to the presence of nitrogen atoms confirmed by X-ray photoelectron spectroscopy (XPS).These nitrogen functionalities enhance pseudocapacitance by creating an electrochemically active interface with the electrolyte ions [36][37][38].
Specific capacitance, a crucial performance metric for supercapacitors, is influenced by current density.As evident in Figures 7c and 8c, the specific capacitance (C s ) at a current density of 0.5 A g −1 was found to be 90.5 F g −1 for PBZC and 132 F g −1 for PBZGC.Moreover, the specific capacitance of both the electrodes decreases with increasing current density.This decrease highlights the limitations of the electrode materials at high charging and discharging rates.The electrochemical analysis revealed superior capacitive behavior and charge storage efficiency for PBZGC, compared to PBZC.Nitrogen functionalities play a significant role in enhancing pseudocapacitance, but further optimization is required to improve the rate capabilities of these materials for practical supercapacitor applications.Specific capacitance, a crucial performance metric for supercapacitors, is influenced by current density.As evident in Figures 7c and 8c, the specific capacitance (Cs) at a current density of 0.5 A g −1 was found to be 90.5 F g −1 for PBZC and 132 F g −1 for PBZGC.Specific capacitance, a crucial performance metric for supercapacitors, is influenced by current density.As evident in Figures 7c and 8c, the specific capacitance (Cs) at a current density of 0.5 A g −1 was found to be 90.5 F g −1 for PBZC and 132 F g −1 for PBZGC.Electrochemical impedance spectroscopy (EIS) stands as a crucial tool in investigating the complexities of electrode-electrolyte interfaces within supercapacitors, offering profound insights into their operational dynamics.This technique delves into key parameters such as interfacial dynamics, diffusion kinetics, electronic conductivity, and charge-transfer resistance (R ct ), all pivotal in determining the overall performance of these energy storage devices.Figures 7d and 8d present Nyquist plots depicting the impedance characteristics of PBZC and PBZGC electrodes, respectively, integral to their electrochemical characterization.These plots are instrumental in discerning critical parameters: the intercept on the real axis signifies the solution resistance (R s ), while the semicircle diameter reflects R ct .Notably, PBZGC exhibits a significantly lower R ct of 2.1 Ω compared to PBZC 3.7 Ω, indicating superior electronic conductivity crucial for enhancing its rate capability, a fundamental attribute for high-performance supercapacitors [39][40][41][42].
The diminished performance observed in PBZC is attributed to its inadequate porosity, resulting in reduced surface area and uneven distribution of nitrogen atoms within the material.In contrast, PBZGC, derived from biomass and enriched with nitrogen content, emerges as a promising alternative electrode material.Its nearly rectangular cyclic voltammetry (CV) curves, triangular galvanostatic charge-discharge (GCD) profiles, and lower R s and R ct values underscore its exceptional capacitive behavior and robust rate capability.Further insights from Figure 9a-d provide a detailed comparative analysis of the electrode materials.The CV curve of PBZGC at 50 mV s −1 exhibits a significantly larger area compared to PBZC, indicative of higher capacitance.Similarly, GCD curves at 0.5 A g −1 illustrate PBZGC's longer discharge time, highlighting its superior charge storage and release capabilities.The superior performance of PBZGC can be attributed to its unique microporous structure, complemented by micro-, meso-, and macropores.Micropores play a crucial role by offering a large surface area essential for efficient ion adsorption, thereby contributing to pseudocapacitance.Meso-and macropores facilitate uniform infiltration of electrolyte throughout the electrode, maximizing the accessible surface area for enhanced ionic interactions.The synergy among these pore sizes and their distribution is pivotal in achieving high capacitance and superior electrochemical performance [43][44][45].The findings presented in Figure 9c demonstrate a substantial enhancement in specific capacitance (Cs) when incorporating a gel network into the PBZGC electrode.At a current density of 0.5 A g −1 , the PBZC electrode achieves a Cs of 90.5 F g −1 , while the PBZGC electrode remarkably achieves 132 F g −1 .This significant improvement is directly a ributed to the optimized porous structure of PBZGC.The interconnected network of pores facilitates efficient electrolyte penetration, thereby maximizing the electrode's surface area available for ion interaction.This structural advantage fulfills a critical requirement for achieving high capacitance, firmly establishing PBZGC as a highly promising The findings presented in Figure 9c demonstrate a substantial enhancement in specific capacitance (C s ) when incorporating a gel network into the PBZGC electrode.At a current density of 0.5 A g −1 , the PBZC electrode achieves a Cs of 90.5 F g −1 , while the PBZGC electrode remarkably achieves 132 F g −1 .This significant improvement is directly attributed to the optimized porous structure of PBZGC.The interconnected network of pores facilitates efficient electrolyte penetration, thereby maximizing the electrode's surface area available for ion interaction.This structural advantage fulfills a critical requirement for achieving high capacitance, firmly establishing PBZGC as a highly promising material for supercapacitor applications.To further evaluate the practical feasibility of PBZGC electrodes, their cycling stability was assessed through a Galvanostatic Charge-Discharge (GCD) study over 5000 cycles (Figure 10).Notably, the C s of the PBZGC electrode demonstrated exceptional stability throughout the extensive cycling test at a current density of 0.5 A g −1 .Despite a slight decrease in specific capacitance from 132 to 124 F g −1 after 5000 cycles, the capacitance retention ratio remained impressive at 94%.This outstanding stability underscores the long-term durability of PBZGC electrodes, demonstrating their suitability for real-world applications in supercapacitor devices.
Conclusions
We developed a new method to create activated porous carbons containing heteroatoms (PBZC and PBZGC) from polybenzoxazine.This innovative approach avoids the use of complex templates and proceeds through a simpler and more cost-effective process.The key lies in using polybenzoxazine as a precursor.This allows the creation of a carbon aerogel using eco-friendly dimethyl sulfoxide as a solvent.The resulting aerogel has an optimized pore-size distribution, making it ideal for electrochemical applications, especially as supercapacitor electrodes.This method not only simplifies production but also enhances the functionality of the carbon aerogel.Notably, the research highlights the advantages of the biomass-derived PBZC electrode compared to PBZGC.The hierarchical pore structure and the presence of heteroatoms in PBZGC contribute to its superior performance.The optimized pores allow for be er electrolyte penetration, and the high surface area (843 m 2 g −1 ) of PBZGC translates to significant improvements in capacitance and cycling stability.These features make PBZGC a promising candidate for high-performance supercapacitors.The well-designed structure, featuring a mix of micropores, mesopores, and macropores facilitates efficient ion transport and storage.This, combined with the heteroatoms, boosts capacitance (132 F g −1 ) and ensures remarkable stability (94% capacitance retention) over extended use.These characteristics are crucial for practical supercapacitor applications.This research demonstrates the potential of bio-based materials for energy storage technologies.By using sustainable resources like benzoxazine, carbon materials containing hetero-atoms were produced, which enhances the capacitance by producing pseudocapacitance in addition to EDLC.This work paves the way for nextgeneration supercapacitors, while highlighting the potential of eco-friendly approaches in materials science.Further exploration and optimization of these materials hold promise for continued advancements in sustainable energy storage solutions.
Conclusions
We developed a new method to create activated porous carbons containing heteroatoms (PBZC and PBZGC) from polybenzoxazine.This innovative approach avoids the use of complex templates and proceeds through a simpler and more cost-effective process.The key lies in using polybenzoxazine as a precursor.This allows the creation of a carbon aerogel using eco-friendly dimethyl sulfoxide as a solvent.The resulting aerogel has an optimized pore-size distribution, making it ideal for electrochemical applications, especially as supercapacitor electrodes.This method not only simplifies production but also enhances the functionality of the carbon aerogel.Notably, the research highlights the advantages of the biomass-derived PBZC electrode compared to PBZGC.The hierarchical pore structure and the presence of heteroatoms in PBZGC contribute to its superior performance.The optimized pores allow for better electrolyte penetration, and the high surface area (843 m 2 g −1 ) of PBZGC translates to significant improvements in capacitance and cycling stability.These features make PBZGC a promising candidate for high-performance supercapacitors.The well-designed structure, featuring a mix of micropores, mesopores, and macropores facilitates efficient ion transport and storage.This, combined with the heteroatoms, boosts capacitance (132 F g −1 ) and ensures remarkable stability (94% capacitance retention) over extended use.These characteristics are crucial for practical supercapacitor applications.This research demonstrates the potential of bio-based materials for energy storage technologies.By using sustainable resources like benzoxazine, carbon materials containing hetero-atoms were produced, which enhances the capacitance by producing pseudocapacitance in addition to EDLC.This work paves the way for next-generation supercapacitors, while highlighting the potential of eco-friendly approaches in materials science.Further exploration and optimization of these materials hold promise for continued advancements in sustainable energy storage solutions.
Scheme 1 .
Scheme 1. Schematic illustration showing the preparation process of EEd-Bzo monomer (Step i) and PBz-based carbon aerogel (Step ii).
Scheme 1 .
Scheme 1. Schematic illustration showing the preparation process of EEd-Bzo monomer (Step i) and PBz-based carbon aerogel (Step ii).
C-NMR spectrum, the oxazine ring carbons exhibit signals at 82.3 ppm for O-C-N and at 55.6 ppm for N-C-Ph carbons.The methylene carbons produce signals at 49.6 and 40.1 ppm, while the allyl carbons show signals at 138 and 115 ppm.The aromatic carbons resonate between 110 and 147 ppm.The peaks and signals observed in both FT-IR and NMR spectra collectively corroborate the formation of the benzoxazine ring, thereby confirming the successful synthesis of the benzoxazine monomer [24-27].
through a detailed examination of the C 1s, N 1s, and O 1s signals [30-32].Gels 2024, 10, x FOR PEER REVIEW 8 of 16 energies of approximately 285.2, 400.1, and 532.3 eV, respectively.A deeper dive into the specific chemical environments is presented in Figure 4 through a detailed examination of the C 1s, N 1s, and O 1s signals [30-32].
Figure 4 .
Figure 4. De-convoluted XPS spectra of PBZC and PBZGC (a) C 1s; (b) N 1s; and (c) O 1s.Deconvolution of the C 1s peak (Figure4a) unveiled four distinct contributions: C=C/C-C hydrocarbon chains (284.8 eV), C-N bonds (285.6 eV), O-C=O/C=N/C-OH groups (286.5 eV), and carbon atoms bonded with oxygen and nitrogen in HN-C=O groups (288.8 eV).This detailed breakdown provides valuable insights into the various carbon bonding configurations within the aerogel structure.The N 1s region (Figure4b) revealed three distinct peaks corresponding to different types of nitrogen functionalities: pyrrolic N (398.4eV), graphitic N (400.6 eV), and pyridine N-oxide (406.3 eV).This variety of nitrogen incorporation suggests a complex yet potentially beneficial chemical environment for specific applications.The O 1s spectrum (Figure4c) further enriches the picture by displaying four peaks at binding energies of 531.5, 533.1, 533.6, and 537.2 eV, respectively, attributed to hydroxyl (C-OH), epoxy (C-O-C), carbonyl (C=O)/carboxyl (COO), and chemisorbed oxygen or water groups[33][34][35].This confirms the presence of oxygen-containing functional groups and potentially entrapped water molecules within the carbon matrix.Similar observations could be observed for PBZGC with a slight shift in the deconvoluted peaks of C 1s, N 1s, and O 1s.This further underscores the presence of diverse nitrogen bondingconfigurations within the PBZGC.In addition, the XPS analysis serves as a compelling validation of the successful nitrogen doping within the synthesized carbon materials.The observed higher nitrogen content compared to reference materials like APFC-N suggests a promising avenue for enhanced performance in applications like supercapacitors.This
Figure 10 .
Figure 10.Cyclic stability of PBZGC showing capacitance retention and coulombic efficiency for 5000 cycles. | 10,530.8 | 2024-08-01T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Environmental Science"
] |
An Expert Diagnosis System for Parkinson Disease Based on Genetic Algorithm-Wavelet Kernel-Extreme Learning Machine
Parkinson disease is a major public health problem all around the world. This paper proposes an expert disease diagnosis system for Parkinson disease based on genetic algorithm- (GA-) wavelet kernel- (WK-) Extreme Learning Machines (ELM). The classifier used in this paper is single layer neural network (SLNN) and it is trained by the ELM learning method. The Parkinson disease datasets are obtained from the UCI machine learning database. In wavelet kernel-Extreme Learning Machine (WK-ELM) structure, there are three adjustable parameters of wavelet kernel. These parameters and the numbers of hidden neurons play a major role in the performance of ELM. In this study, the optimum values of these parameters and the numbers of hidden neurons of ELM were obtained by using a genetic algorithm (GA). The performance of the proposed GA-WK-ELM method is evaluated using statical methods such as classification accuracy, sensitivity and specificity analysis, and ROC curves. The calculated highest classification accuracy of the proposed GA-WK-ELM method is found as 96.81%.
Introduction
Parkinson disease (PD) is a degenerative disorder of the central nervous system. It results from the death of dopamine generating cells in the substantia nigra, a region of the midbrain. This disease affects about 1% of the world population over the age of 55 [1,2].
In advanced stages of the disease, nonmotor features, such as dementia and dysautonomia, occur frequently [3]. PD is diagnosed in case of presence of two or more cardinal motor features such as rest tremor, bradykinesia, or rigidity [4]. Functional neuroimaging holds the promise of improved diagnosis and allows assessment in early disease [5].
The main symptoms of PD are bradykinesia, tremor, rigidity, and postural instability. When all of these symptoms are seen, then the person can be diagnosed by doctors with Parkinson disease. The dysphonia is considered to be one of the most difficult aspects of Parkinson disease by many patients and their families. Nearly 9 out of 10 people with PD have a speech or voice disorder. Dysphonic symptoms typically include reduced loudness, breathiness, roughness, decreased energy in the higher parts of the harmonic spectrum, and exaggerated vocal tremor and these symptoms can be detected using many different vocal tests [6]. In the design of an automatic diagnosis system for PD, it is more suitable to use the voice data because it is one of the most common symptoms. In literature, there are many studies on speech measurement for general voice disorders [6][7][8][9][10][11][12]. In these studies, the speech signals are recorded and then these signals are detected by means of different methods certain properties of these signals. Then, a classifier is used to diagnose patients with PD from certain properties of signal. The classifier is the heart of the automatic diagnosis system. The reliable classifier should diagnose the disease at as high accuracy as possible even though there are many uncontrolled variations. In literature, different classifiers have been proposed for automatic diagnosis of PD. The NNs and adaptive neurofuzzy classifier with linguistic hedges (ANFIS-LH) are investigated for automatic diagnosis of PD in [13]. The performance of probabilistic neural network (PNN) for automatic diagnosis of PD is evaluated in [14]. SVM classifier is also investigated for the same goal in [15]. NNs have some drawbacks such as need of long training time and uncertainties in activation function to be used in hidden layer, number of cells in hidden layer, and the number of hidden layer. In case of SVM, type of kernel function and penalty constant and so forth affects the classification performance. If these parameters are not appropriately selected, the classification performance of SVM degrades. Similarly, the performance of ANFIS depends on type and parameters of membership function and output linear parameters.
Among these classifiers, NNs have been widely used in pattern recognition and regression. The NNs are commonly trained by backpropagation based on a gradient-based learning rule [16]. Up to now, the gradient-based learning methods have been widely applied for learning of NNs [17,18]. However, they have several shortcomings such as difficult setting of learning parameters, slow convergence, training failures due to local minima, and repetitive learning to improve performance of NNs. Also, it is clear that gradient descent-based learning methods are generally very slow [20].
Recently, the Extreme Learning Machine (ELM) proposed by Huang et al. has been widely used in classification regression problems because of its properties fast learning capability and fast learning. Although output weights are analytically calculated, there is no rule in determination of number of hidden neurons, activation function. The ELM may not provide high classification performance because of cases mentioned above.
In [6], the GA is used for selection of feature subset for the input of ANN. The proposed method is not suitable for realtime implementation. Besides, the feature vector is randomly reduced to a lower dimension in [3,6,7,9]. ANFIS structure might not have a good performance if a huge amount of data exists.
Recently, a new learning algorithm called Extreme Learning Machine (ELM) which randomly selected all the hidden nodes parameters of generalized single-hidden layer feedforward networks (SLFNs) and analytically determines the output weights of SLFNs is proposed in [18][19][20]. Although output weights are analytically calculated, there is no rule in determination of number of hidden neurons and type of the kernel function. To obtain a good classification performance of ELM, these parameters should be determined properly.
This paper proposes an expert Parkinson diseases (PD) diagnosis system based on genetic algorithm-(GA-) wavelet kernel-(WK-) Extreme Learning Machines (ELM). The classifier used in this paper is single layer neural network (SLNN) and it is trained by the ELM learning method. In wavelet kernel-Extreme Learning Machine (WK-ELM) structure, there are three adjustable parameters of wavelet kernel. These parameters and the numbers of hidden neurons play a major role in the performance of ELM. Therefore, values of these parameters and numbers of hidden neurons should be tuned carefully based on the solved problem. In this study, the optimum values of these parameters and the numbers of hidden neurons of ELM were obtained by using a genetic algorithm (GA). The hepatitis disease datasets are obtained from the UCI machine learning database. The performance of the proposed GA-WK-ELM method is evaluated through statical methods such as classification accuracy, sensitivity and specificity analysis, and ROC curves. In here, the numbers of hidden neurons of ELM and parameters of wavelet kernel function are optimized by GA. In GA structure, an individual is composed of a total of 20 bits. These are as follows: (i) The first four bits (1st, 2nd, 3rd, and 4th bits) of each of these individuals represent the parameter values (between 1 and 16) of the wavelet kernel functions.
(ii) The second four bits (5th, 6th, 7th, and 8th bits) of each of these individuals represent the parameter values (between 1 and 16) of the wavelet kernel functions.
(iii) The third four bits (9th, 10th, 11th, and 12th bits) of each of these individuals represent the parameter values (between 1 and 16) of the wavelet kernel functions.
(iv) The rest of the 20 bits represent the number of hidden neurons (between 5 and 260).
The 40 number of these individuals is randomly chosen for the initial population. Thus, it is purposed to obtain the best possible performance from ELM classifier. The training and testing dataset for the proposed method is obtained from the UCI dataset. This dataset is composed of 192 pieces of data. The randomly selected 128 of 192 pieces of data are used for training of classifier whereas the remaining data is used for testing of classifier. For different kernel function and number of hidden neurons, the results of the proposed method are given. Further, a comparison is performed with previous studies to show the validity of the proposed method. From results, the proposed method is a quite powerful tool for automatic diagnosis of hepatitis and may work in real-time systems. The paper is organized as follows. Section 2 presents pattern recognition for the diagnosis of Parkinson disease. In Section 3, wavelet kernel-Extreme Learning Machines and in Section 4 genetic algorithms are briefly presented, respectively. In Section 5, application of GA-WK-ELM for the diagnosis of Parkinson disease is explained. The obtained results are given in Section 6. Finally, Section 7 provides the discussion and conclusion of this study.
Pattern Recognition for Diagnosis of Parkinson Disease
The pattern recognition for diagnosis of disease is commonly composed of two stages. They are feature extraction and classification stages. In the feature extraction stage, the useful information of data is extracted by a feature extractor. The feature extraction not only reduces the computational burden of the classifier but also improves classification performance. In classification stage, extracted features from data are given as input to the classifier. Depending on classification problem, the data is separated into two or more classes by the classifier. The pattern recognition concept used in this study is given in Figure 1. The proposed concept consists of three stages including feature extraction, classification, and optimization of classifier's parameters. These stages are explained in detail as follows.
Wavelet Kernel-Extreme Learning Machines
In literature, the neural networks have been commonly used in pattern recognition and regression problems [20,21]. The gradient-based learning and backpropagation algorithms are most commonly used methods for neural networks [17,18]. Moreover, these methods have some drawbacks such as difficult setting of learning parameters, slow convergence, slow learning, and training failures [19]. Because of these disadvantages of classic gradient-based learning and backpropagation neural network algorithms, the Extreme Learning Machine (ELM) algorithm is proposed by Cho et al. [19]. In the Extreme Machine Learning algorithm, the output weights of a single-hidden layer feedforward network (SLFN) are analytically calculated by using the Moore-Penrose (MP) generalized inverse instead of iterative learning scheme [20]. In Figure 2, the structure of a singlehidden layer feedforward network using Extreme Learning Machine algorithm is given. In here, 1 , 2 , and are weights vector connecting the th hidden neuron and the input neurons, is the weight vector connecting the th hidden neuron and output neuron, and (⋅) is the activation function.
The most significant features of ELM are ordered as below: (i) In ELM structure, the learning speed is very fast.
Because of this, single-hidden layer feedforward network can be trained by using ELM. Thus, an ELM learning method, which is faster than other classical learning methods, is obtained.
(ii) The obtaining of the less training error and the fewer norms of weights are aimed at by using ELM, because the ELM learning algorithm has good performance for neural networks.
(iii) In the structure of single-hidden layer feedforward network, the ELM learning algorithm is used with nondifferentiable activation functions.
(iv) The easy solutions are tried to get in the ELM structure [19].
The outputs of an ELM with neurons and activation function are given as below: The ELM learning algorithm has faster learning speed than classic neural networks. Moreover, it has better generalization performance than them. Nowadays, the number of researchers, who work in ELM science topic, has increased [19][20][21][22][23]. In ELM learning algorithm, the initial parameters of the hidden layer need not be tuned. In ELM algorithm, all nonlinear piecewise continuous functions are used as the hidden neurons. Therefore, for optional various samples {( , ) | ∈ , ∈ , = 1, . . . , }, the output function in ELM by using hidden neurons is is the output vector of the hidden layer with respect to the input . = [ 1 , 2 , . . . , ] is the vector of the output weights between the hidden layer of neurons and the output neuron. V vector changes the data from input space to the ELM feature space [19][20][21][22][23]. The training error and the output weights should be synchronously minimized for decreasing the training error in ELM algorithm. So, generalization performance of neural networks increases: In here, (3) can be solved by using where is the regulation coefficient, is the hidden layer output matrix, and is the expected output matrix of samples, respectively. So, the output function of the ELM learning algorithm can be given as follows: If the feature vector V( ) is unknown, the kernel matrix of ELM based on Mercer's conditions can be computed as follows: In this way, the output function ( ) of the wavelet kernel-Extreme Learning Machine (WK-ELM) can be given as below: In there, = and ( , ) is the kernel function of Extreme Learning Machine. There are some kernel functions, which are linear kernel, polynomial kernel, Gaussian kernel, and exponential kernel, appropriate for the Mercer condition in ELM literature. The readers can find more details in [21,22]. In this study, wavelet kernel function is used for simulation and performance analysis of WK-ELM: In the result of these application studies, it was observed that the training and testing performance of the wavelet kernel function shown in (8) is better than the performances of linear kernel, polynomial kernel, Gaussian kernel, and exponential classical kernel functions, respectively. The values of adjustable parameters , , and are important for training performance of ELM. Because of this, values of these parameters should be attentively tuned for solving the problem. However, the hidden layer feature mapping need not be known and the number of hidden neurons need not be chosen in WK-ELM algorithms. Moreover, the WK-ELM learning algorithm has better generalization performance than classic ELM learning algorithm. At the same time, it was shown that WK-ELM is more stable than classic ELM and is faster than Support Vector Machine (SVM) [24].
Genetic Algorithms
To solve a problem, an evolutionary process is used in the structures of genetic algorithms [25]. A genetic algorithm begins with a set of solutions which are represented by individuals. This set of solutions is known as a population. Each population is a solution set and new solutions are selected according to their fitness values. In the genetic algorithm, the iterative process is repeated as long as the new population is better than the old one. The higher the fitness value of an individual is, the more likely this individual is reproduced for the next population. The iterative process is finished when some conditions (e.g., number of individuals in the population) are satisfied [26]. The stages of the genetic algorithm are given as below.
Stage 1. A random population of individuals is created.
These individuals are a suitable solution to the problem. In here, the value of is 20.
Stage 2. The fitness ( ) of each individual is calculated in the population [25]. In these experimental studies, each of the individuals in the population is randomly formed.
Stage 3. Two parental individuals from among the individuals are selected. These individuals have the higher fitness value in the population. Then, the crossover operator is realized to these parental individuals. The aim of the crossover operator is creating the varied individuals. These have higher fitness values than former individuals.
Stage 4.
In this stage, a crossover probability is used for crossover operating to form the new individuals. If crossover is not done, the individual will be the exact copy of the parents.
Stage 5. In this stage, each new individual is obtained by mutating with a mutation probability. This mutation process is realized by using any one or more bits of the individual. Stage 8. In this stage, it is returned to Stage 2. Then, the new generated population is used for further algorithm.
Application of GA-WK-ELM for Diagnosis of Parkinson Disease
The Parkinson dataset used in this study is composed of a range of biomedical voice measurements from 31 people, 23 with Parkinson disease (PD), and it includes a total of 192 voice recordings from individuals. In addition, these biomedical voice measurements have different attribute information given in Table 1 [6][7][8][9][10][11][12].
The essential aim of processing the data is to discriminate healthy people from those with PD, according to the "status" attribute which is set to 1 for healthy people and 0 for people with PD, which is a two-decision classification problem.
The block diagram of the proposed method is given in Figure 3. As shown in the figure, the feature vector obtained from the PD dataset is applied to WK-ELM optimized with GA. The Parkinson dataset used in this study is taken from the University of California at Irvine (UCI) machine learning repository [6][7][8][9][10][11][12]. It was used for training and testing of the proposed GA-WK-ELM method. The dataset has 22 relevant features as given in Table 1 and includes a total of 192 cases. Thus, it is a matrix with dimensions of 192 × 22. Training of the GA-ELM is carried out with dataset of 128 and the remaining data is used for testing of GA-WK-ELM. To optimize the parameters of WK-ELM, GA is used. The fitness function of the GA is training error of WK-ELM classifier.
This GA-WK-ELM method for diagnosis of PD includes three layers. In the first layer of GA-WK-ELM, the Parkinson data is obtained from the UCI machine learning database mentioned in Section 5. In the second layer of GA-WK-ELM, the numbers of hidden neurons of WK-ELM and parameters of wavelet kernel function are optimized by the GA. In the GA structure, an individual has a total of 20 bits. These bits can be ordered as below: The 40 number of these individuals is randomly selected for the initial population. So, this GA structure is purposed to obtain the best possible performance from the WK-ELM The block diagram of the proposed GA-WK-ELM method is given in Figure 4. In these applications, a 3fold cross-validation schema was applied where the twofifths data were used for training the proposed GA-WK-ELM method and the other remaining data were used as the test dataset. This method was repeated for three times for obtaining the average classification rates. Thus, the correct diagnosis performance of the suggested GA-WK-ELM method is computed.
In here, the maximum training accuracy value of the WK-ELM classifier was used as the fitness function of GA. This maximum training accuracy was calculated from the result of training of WK-ELM for each of the individuals by using parameters represented by these individuals. The , , and parameters values of wavelet kernel function and the number of hidden neurons of the WK-ELM classifier are optimized by GA. The PD dataset has 22 relevant features. These features were obtained from 192 patients. So, dimensions of the features vector are 192 × 22. Here, 40 random individuals are selected as the initial population. Each of these individuals has 20 bits. In Tables 2 and 3, coding for parameters of wavelet kernel function and the number of hidden neurons are given, respectively.
An example for individuals of the population is shown in Figure 5. The 1st, 2nd, 3rd, and 4th bits of this individual symbolize the parameter values of the wavelet kernel functions, which are between 1 and 16. The 5th, 6th, 7th, and 8th bits of this individual symbolize the parameter values of the wavelet kernel functions, which are between 1 and 16. The 9th, 10th, 11th, and 12th bits of this individual symbolize the parameter values of the wavelet kernel functions, which are between 1 and 16. The rest of the 20 bits of this individual symbolize the number of hidden neurons, which are between 5 and 260.
The correct diagnosis performance of the suggested GA-WK-ELM method for PD dataset is computed by three The performance of the proposed method is evaluated by the Sensitivity Analysis (SEA) and Specificity Analysis (SPA) and classification accuracies, which are obtained from statistical methods in given equations ((9)-(11)), respectively, are presented in Table 6: the number of correctly classified persons with PD the number of total PD cases , SPA = the number of persons correctly classified as healthy the number of total healthy persons .
The overall classification correct ratio of the proposed method (OC) is calculated as (7): OC = the number of correct classifications the number of total cases .
In this experimental study, a genetic algorithm structure was designed for deciding the , , parameters values of wavelet kernel function and the number of hidden neurons of the WK-ELM classifier. A total of 20 bits are used for each of the individuals in the initial population. In this GA structure, the 1st, 2nd, 3rd, and 4th bits of the individual give the parameter value, the 5th, 6th, 7th, and 8th bits of this individual give the parameter value, and the 9th, 10th, 11th, and 12th bits of the individual give the parameter value, respectively. The remainder of the 20 bits of the individual give the number of hidden neurons, which are between 5 and 260.
Obtained Results
In these experimental studies, an expert diagnosis system for PD based on the GA-WK-ELM method is introduced.
Then, the correct PD diagnosis performance of the suggested GA-WK-ELM method is also evaluated by classification accuracy, sensitivity and specificity analysis, and ROC curve, respectively.
The suggested GA-WK-ELM method is used for finding the optimum values of the wavelet kernel function , , and parameters and the number of ELM classifier hidden neurons in these experimental studies. The comparing results by using the GA-WK-ELM method and the classic ELM classifier by using the same PD database can be given in Table 4. In these classic ELM classifiers, each of sigmoid, tangent sigmoid, triangular basis, radial basis, hard limit, and polykernel functions is used as the kernel function, respectively. The readers can find more detailed information about these kernel functions in [21,22]. As shown in this table, the best correct diagnosis rate of the suggested GA-WK-ELM method is found as 96.81% by using 15, 3, 10, and 86 values for the , , wavelet kernel function parameters and the number of hidden neurons, respectively.
As shown in Table 4, the highest correct PD diagnosis rate is obtained as 96.81% by using the suggested GA-WK-ELM method, because the optimum values of the WK-ELM , , parameters and the numbers of hidden neurons of WK-ELM were obtained by using a genetic algorithm (GA) in these experimental studies. In this study, after finding the optimum parameters, we do not need to use GA and then the WK-ELM can be directly used.
In Table 5, to show the validity of the suggested GA-WK-ELM method, compared results with previous studies using the same dataset [6][7][8][9][10][11][12] are also given. From this table, the highest diagnosis rate is calculated as 94.72 by [13] by using the neurofuzzy classifier with linguistic hedges (ANFIS-LH). In here, training times have not been given in these studies. Moreover, the feature vector is randomly reduced to a lower dimension in [13]. The suggested GA-WK-ELM method in this study shows a correct diagnosis performance even though the feature vector is directly used without reduction. However, the training time of WK-ELM is extremely short. The obtained PD diagnosis accuracies by statistical evaluation criteria are given in Table 6.
In this study, ROC curves and AUC values are calculated by using TP, TN, FP, and FN, which are true positives, true negatives, false positives, and false negatives, respectively [27]. The used ROC curve in this study is a graphical plot. It shows the performance of a binary classifier system as its discrimination threshold is varied. The ROC curve is formed by plotting the true-positive rate against the false-positive rate at various threshold settings. In here, the true-positive rate is also known as sensitivity in biomedical informatics or recall in machine learning. The false-positive rate is also known as the fallout. It can be calculated as 1 − specificity. So, the ROC curve is the sensitivity as a function of the fallout.
ROC analysis supplies tools to choose the optimal models. Moreover, it eliminates the suboptimal ones independently from the class distribution or the cost context. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision-making. In here, the ROC curve of GA-WK-ELM is given by using the obtained best TP, TN, FP, and FN values in Figure 6. The obtained AUC value of ROC curves by using the GA-WK-ELM classifier can be given as 0,9576.
Discussion and Conclusion
This paper suggests an expert PD diagnosis system based on GA-WK-ELM. The proposed GA-WK-ELM PD diagnosis system has advantages such as finding of the optimal , , and parameters combination of wavelet kernel, direct using of feature vector, fast training and testing time, and generalization capability over conventional neural networks with backpropagation. The suggested GA-WK-ELM method is formed from two stages as WK-ELM classification and optimization of WK-ELM classifier's parameters. The feature vector from Parkinson dataset is used as input to the WK-ELM classifiers. In wavelet kernel-Extreme Learning Machine (WK-ELM) structure, there are three adjustable , , and parameters of wavelet kernel. These , , and parameters and the numbers of hidden neurons play a major role in the performance of WK-ELM. Because of this, values of these , , and parameters and numbers of hidden neurons should be carefully set based on the solved diagnosis of the PD problem. In this paper, the optimum values of these wavelet kernel parameters and the numbers of hidden neurons of WK-ELM were calculated by using GA. The output of WK-ELM makes decisions about diagnosis of PD. The optimum values of the wavelet kernel , , and parameters and numbers of hidden neurons of the WK-ELM classifier are calculated by a GA to obtain the best possible PD diagnosis performance. The feasibility of the suggested GA-WK-ELM method has been tested by using PD dataset. This dataset has 192 test cases. The suggested GA-WK-ELM method has effective PD diagnosis performance when compared with previous studies depending on direct using of the same feature vector and training time as shown in Tables 4-6 and Figure 6. | 6,014.4 | 2016-05-05T00:00:00.000 | [
"Computer Science"
] |
Metabolite Profiling of Compounds from Sargassum polycystum using UPLC-QToF-MS/MS
profile of
INTRODUCTION
In Indonesia, there are many types of seaweed that have high economic value. One of them is brown seaweed (Sargassum polycystum). S. polycystum has the potential to be used as a raw material in industry and as a medicinal plant. It has secondary metabolites like alkaloids, glycosides, tannins, and steroids that are good for your health and are widely used in medicine and the pharmaceutical industry. 1 They also contain bioactive compounds such as fucoxanthin, steroids, phlorotannins, flavonoids, and saponins. [2][3][4] The development of research on medicinal plants is increasing to get the benefits of the medicinal content of these plants for health. Therefore, it is important to ensure the quality of medicinal plants meets the requirements. The factors that can affect differences in chemical composition and quantity of a compound in plants are growing environmental conditions such as climate, growing media, the altitude where it grows, and metabolic processes (anabolism and catabolism) and their biosynthetic pathways. 5 Thus, an analytical method is needed that can identify the diversity of metabolome profiles (total metabolites present in the sample). One method that can be used in determining the diversity of metabolite profiles is the metabolomic approach. Metabolomics is the study of metabolite profiles in isolated biological samples, tissues, and cells. The aim is to identify all analytes, their concentrations, and metabolite profiles in plants. 6 Metabolite profiling analysis can use several techniques, namely a combination of chromatography and spectrophotometry. These techniques can provide detailed chromatographic profile of the detected sample. 7 Using the UPLC-QToF MS/MS equipment, this study seeks to determine the metabolite profile of brown seaweed (Sargassum polycystum) from Sumenep, Madura Island, Indonesia. Using the Solid Phase Extraction (SPE) technique, extracts and fractions were obtained. The samples were subsequently placed within the ACQUITY UPLC® H-Class System's MS Xevo G2-S QToF detector (Waters, USA). The samples were separated on an ACQUITY BEH C18 column (1.7 m x 2.1 x 50 mm) with acetonitrile + 0.05% formic acid and water + 0.05% formic acid as mobile phases, with a flow rate of 0.2 ml/min. The results of the UPLC-QToF-MS/MS analysis were processed using MassLynx 4.1 software to obtain chromatogram data and m/z spectra of each detected peak. The detected compounds were further confirmed using the ChemSpider and MassBank online databases.
Extraction and fractionation
S. polycystumwas extracted with 96% ethanol using an ultrasonic method (Soltec Sonica 5300EP S3, Italy) for 3x5 minutes. Then it was filtered, and the filtrate was evaporated with a rotary evaporator Heidolph G3 at a temperature of 50°C with a rotation speed of 70 rpm. The ethanol extract then suspended in water at a 1:10 ratio. In a separating funnel, the water suspension (aqueous phase) was combined in a 1:1 ratio with n-hexane for liquid-liquid fractionation. The shaking procedure was repeated multiple times. After separating the n-hexane phase from the water phase, the n-hexane fraction of seaweed is evaporated using a rotary evaporator. The extracted and separated aqueous phase is then combined with ethyl acetate in a 1:1 ratio in a separating funnel for liquid-liquid fractionation using the same processes. The same procedures were followed to create the seaweed water fraction.
Metabolite profiling
Utilizing the UPLC-QToF-MS/MS technology, the metabolite profiling procedure was carried out at the Forensic Laboratory Center of the Indonesian National Police Criminal Investigation Agency. The SPE method was utilized throughout the process of extract and fraction preparation. Following this step, the samples were introduced into the ACQUITY UPLC® H-Class System's MS Xevo G2-S QToF detector (Waters, USA). The samples were separated using an ACQUITY BEH C18 column that was 2.1 m long and 50mm wide diameter and had a flow rate of 0.2 ml/min. The mobile phases consisted of acetonitrile + 0.05 % formic acid and water + 0.05 % formic acid. The findings of the UPLC-QToF-MS/MS analysis were processed with the MassLynx 4.1 software in order to get chromatogram data as well as the m/z spectra of each observed peak. Using the internet databases ChemSpider and MassBank, further verification of the chemicals that were found was carried out.
RESULTS AND DISCUSSION
Metabolite profiling was carried out to predict the content of compounds contained in extracts and fractions of S. polycystum. 8 Metabolite profiling was carried out with the UPLC-QToF-MS/MS instrument. Previously, extracts and fractions from S. polycystum had been prepared using the Solid Phase Extraction (SPE) method. The advantage of sample preparation with SPE is that it can separate impurities from the sample to produce higher spectral sensitivity. 9 Metabolite profiling of extracts and fractions S. polycystum was carried out with the UPLC-QToF-MS/MS instrument, which had previously been prepared using the Solid Phase Extraction (SPE) method. Analysis of the blank total ion chromatogram (TIC) was determined before determining the TIC of the compound in the sample so as not to cause bias when identifying the sample. A mass spectral analysis of each TIC peak was performed using MassLynx 4.1 software and confirmed with ChemSpider and MassBank online databases. (Table 1, Table 2, Table 3, and Table 4).
According to the findings of metabolite profiling carried out with UPLC-QToF-MS/MS, the extract and a fraction of S. polycystum include a total of 232 compounds. Of these compounds, 168 are known compounds, while the remaining 64 are unknown compounds. In the process of metabolite profiling, it is not possible to identify all of the peaks in the TIC based on the sum of all of the discovered chemicals. This is demonstrated by the presence of chemicals with uncertain identities in each extract and fraction. Compounds that are unable to be recognized in the database are referred to as unknown compounds. These compounds may be impurities or breakdown products that are still picked up by the instrument, or they may be new compounds that aren't in the database yet, particularly unknown compounds with high levels. 10,11 Either way, the instrument may still be able to detect them. L-Valyl-L-isoleucyl-L-isoleucyl-Lalanyl-L-α-aspartyl-L-cysteinylglycyl-L-α-glutamyl-L-tyrosine L-Seryl-L-asparaginyl-L-glutaminyl-L-α-glutamyl-L-tyrosyl-L-leucyl-L-αaspartyl-L-leucyl-L-serine The results of metabolite profiling performed on 96% ethanol extract showed a total of 61 compounds, 46 of which were known compounds and 15 of which were unknown compounds; the n-hexane fraction showed a total of 55 compounds, consisting of 38 known compounds and 17 unknown compounds; the ethyl acetate fraction showed a total of 67 compounds, consisting of 45 known compounds and 22 unknown compounds; and the water fraction showed a total of 49 The interpretation of these metabolites reveals that several dominant compounds or major compounds have higher levels (indicated by percent area) when compared to the levels of other compounds found in the sample. This is the case because these levels are higher than the levels of the other compounds. In the 96% ethanol extract, the major components were 2-methyl-2-(3-oxobutyl)-1,3-cyclohexanedione with a percent area of 27.6748%; in the n-hexane fraction, the major components were seryllysylvaline with a percent area of 29.8551%; in the ethyl acetate fraction, the major components were 2-methyl-2-(3oxobutyl)-1,3-cyclohexanedione with a percent area of 41.4148 %; and in water fraction, the major components were ectoin with a percent area of 29.9702 %.
CONCLUSION
The 96 % ethanol extract of S. polycystum indicated a total of 61 compounds, including 46 known compounds and 15 unknown compounds; the n-hexane fraction indicated a total of 55 compounds, including 38 known compounds and 17 unknown compounds; the ethyl acetate fraction indicated a total of 67 compounds, including 45 known compounds and 22 unknown compounds; and the water fraction indicated a total of 49 compounds, including 39 known compounds and 4 unknown compounds. | 1,757 | 2023-06-30T00:00:00.000 | [
"Chemistry",
"Medicine",
"Environmental Science"
] |
A 0.9% Calibration of the Galactic Cepheid luminosity scale based on Gaia DR3 data of open clusters and Cepheids
We have conducted a search for open clusters in the vicinity of classical Galactic Cepheids based on high-quality astrometry from the third data release (DR3) of the ESA mission Gaia to improve the calibration of the Leavitt law (LL). Our approach requires no prior knowledge of existing clusters, allowing us to both detect new host clusters and cross-check previously reported associations. Our Gold sample consists of 34 Cepheids residing in 28 open clusters, including 27 fundamental mode and 7 overtone Cepheids. Three new bona fide cluster Cepheids are reported (V0378 Cen, ST Tau, and GH Lup) and the host cluster identifications for three others (VW Cru, IQ Nor, and SX Vel) are corrected. The fraction of Cepheids occurring in open clusters within 2 kpc of the Sun is $f_{CC,2kpc} = 0.088^{+0.029}_{-0.019}$. By combining cluster and field Cepheids, we calibrate the LL for several individual photometric passbands, together with reddening-free Wesenheit magnitudes based on Gaia and HST photometry, while solving for the residual offset applicable to Cepheid parallaxes, $\Delta \varpi_{\mathrm{Cep}}$. The most direct comparison of our results with the SH0ES distance ladder yields excellent ($0.3\sigma$) agreement for both the absolute magnitude of a 10d solar metallicity Cepheid in the near-IR HST Wesenheit magnitudes, $M_{H,1}^W=-5.914\pm 0.017$ mag, and the residual parallax offset, $\Delta \varpi_{\mathrm{Cep}}=-13 \pm 5\,\mu$as. Using the larger sample of 26 Gold cluster Cepheids and $225$ MW Cepheids with recent Gaia DR3 astrometry and photometry, we determine at solar metallicity $M_{G,1}^W = -6.004 \pm 0.019$\,mag and $\Delta \varpi_{\mathrm{Cep}}=-19 \pm 3\,\mu$as. These results mark the currently most accurate absolute calibrations of the Cepheid luminosity scale based purely on observations of MW Cepheids.
Introduction
The absolute calibration of the classical Cepheid luminosity scale is fundamental for a distance estimation in the nearby Universe and the accurate measurement of Hubble's constant, H 0 .The third data release (DR3) of the ESA mission Gaia has provided astrometry of unprecedented quantity and quality (The Gaia collaboration et al. 2016, 2021) for approximately 1.5 billion stars in the magnitude range 3 < G < 21, including 14992 classical Cepheid stars (Eyer et al. 2022;Ripepi et al. 2022b) with an average parallax uncertainty of 70 µas.Because the parallax is generally considered the gold standard of geometric distance measurements, the Gaia parallaxes are of crucial importance for the absolute calibration of Leavitt's law Leavitt & Pickering (1912, henceforth: LL), also known as the periodluminosity relation, and they are of great interest for all further applications of Cepheids as distance tracers.In particular, Gaia parallaxes are required to clarify the implications of the current 5σ discrepancy between the value of H 0 measured using Tables 1 -4, 7, 10 and A.1 are available in electronic form at the CDS via anonymous ftp to cdsarc.cds.unistra.fr(130.79.128.5) or via https://cdsarc.cds.unistra.fr/cgi-bin/qcat?J/A+A/ a distance ladder composed of classical Cepheids and type Ia supernovae (e.g., Riess et al. 2022b) and the value of H 0 inferred from observations of the ESA mission Planck of the cosmic microwave background assuming a flat ΛCDM Universe (Planck Collaboration et al. 2020).
However, Gaia-based LL calibrations based on Cepheid parallaxes must currently simultaneously solve for a residual parallax offset due to systematics of the Gaia data processing in addition to the LL intercept and slope (e.g., Riess et al. 2021).Because this simultaneous parallax offset determination reduces the precision to which Gaia can calibrate the LL, strategies for mitigating this problem are needed.Lindegren et al. (2021, henceforth: L21) derived corrections to the zeropoint offset of about 10 − 30 µarcsec, whose exact value depends nontrivially on the magnitude of the observed source, its position in the sky, and its color.Several studies (not necessarily based on Cepheids) have investigated residual (compared to Lindegren's correction) zeropoint offsets, generally finding good agreement at magnitudes (G 13 mag) at which L21 is well calibrated (e.g., Huang et al. 2021;Riess et al. 2021;El-Badry et al. 2021), whereas an offset remains at the brighter end, where the L21 calibration was based on fewer sources.The origins of these residual offsets are complex and not yet fully understood, although it is likely that they originate from differences between the Cepheid and quasar samples, with Cepheids being systematically brighter, of redder intrinsic color, and photometrically and chromatically variable.Moreover, the Milky Way Cepheids that were used to calibrate the LL fall within a magnitude range (G 13 mag) that requires special observational and data-processing steps to avoid saturation (including the gating mechanism to avoid saturation and changing from 2D to 1D image processing for the astrometric model, cf.L21).
An interesting possibility for avoiding difficulties related to this zeropoint systematic could be the use of parallax information derived from stars that are observationally as similar as possible to the objects used to determine the Gaia systematics.Because Cepheids are relatively young stars (< 300 Myr), they are occasionally found in open star clusters (cf.Anderson et al. 2013, and references therein), whose brightest main-sequence members will tend to be bluer than Cepheids, and several magnitudes fainter.At the same time, open clusters contain many stars, so that an average cluster parallax will benefit from a √ N improvement in precision, eventually limited by the angular covariance of the Gaia parallaxes (Lindegren et al. 2021;Apellániz et al. 2021;Vasiliev & Baumgardt 2021;Zinn 2021).
The currently most common approach to identifying cluster Cepheids is to consider cluster input lists from studies based on Gaia astrometry (Cantat-Gaudin & Anders 2020; Hunt & Reffert 2021;Castro-Ginard et al. 2022;Zhou & Chen 2021;He et al. 2022) and to then compare the astrometric parameters of Cepheids with the average cluster parameters (Anderson et al. 2013;Breuval et al. 2020;Zhou & Chen 2021;Medina et al. 2021).However, there is no guarantee that all Cepheid-hosting clusters have been detected so far, and the selection function of clusters is not well known.It is also rather common for Cepheids to reside in the coronae of their host clusters, that is, farther from the center than the typical cluster core radius of ∼ 4 pc (e.g., Anderson et al. 2013).This is to some extent expected from the clustered star formation process that causes the majority of birth clusters to disperse into the field over timescales of tens of million years (Dinnbier et al. 2022).Tidal deformations further cause cluster shapes to deviate from circular over hundreds of million years, thus breaking the symmetry of the appearance and complicating the detection of cluster members against a highly contaminated background (Boffin et al. 2022).Additionally, it is quite common for multiple clusters to exist relatively close to each other on the sky (Turner 1998) because of the high density of clusters in spiral arms and the superposition on sky of multiple spiral arms.Substantial and spatially variable extinction can further complicate the issue.To most reliably determine the most complete sample of cluster Cepheids detectable with Gaia DR3 data, we therefore adopted the approach of searching for clusters in the vicinity of Cepheids, rather than the other way around.
A major improvement of the extragalactic distance ladder built by the SH0ES project (Riess et al. 2022b) has been the photometric homogeneity of Cepheid observations carried out exclusively in the Hubble Space Telescope (HST) photometric system.With the release of time-series observations in Gaia DR3, there is now an additional data set of very high quality, well-resolved multichromatic observations based on a wellcharacterized and homogeneous photometric system that includes observations of Milky Way and Local Group Cepheids, reaching Cepheids as far as M31 and M33 (Evans et al. 2022), albeit with increased uncertainties due to higher instrumental noise and higher crowding.The goal of this paper is to leverage these unprecedented data sets to achieve the most accurate absolute calibration of the MW LL in well-characterized filters, notably including the reddening-free near-IR HST Wesenheit function used by the SH0ES team to measure the Hubble constant (Riess et al. 2022b), while simultaneously solving for the residual parallax offset of Cepheids.
This article is organized as follows.Section 2 describes our method for detecting and estimating the parameters of clusters in the physical vicinity of MW Cepheids based on Gaia data and the estimation of membership probabilities for the Cepheids.Section 3 separates the sample of cluster Cepheid candidates into Gold, Silver, and Bronze samples.Section 4 presents the simultaneous calibration of the Cepheid LL in multiple photometric bands and an LMC-based cross-check of the L21 corrections applied to cluster member stars.Section 5 presents an additional discussion, and Sect.6 lists our conclusions.Additional tables and figures are provided in the online appendix.
Method
The starting point of our analysis was the list of positions of 3352 Milky Way classical Cepheids that were classified by the OGLE collaboration based on a large combination of all-sky time-series survey data (Pietrukowicz et al. 2021), which we extended by 230 additional classical Cepheids that were reported by Gaia DR3 in June 2022 (Ripepi et al. 2022b;Eyer et al. 2022).While there can be disagreements over Cepheid classifications, especially for overtone Cepheids with sinusoidal light curves, we note that the list by Pietrukowicz et al. (2021) was used to validate the Gaia DR3 sample, and that the samples of MW Cepheids overlap by ∼ 85% .Extragalactic Cepheids and Cepheids that are too distant for identifying clusters were removed from the Gaia DR3 sample by the quality cuts explained in Sec.2.1 and by requiring Cepheids to be brighter than G < 16 mag.For each Cepheid we considered, we retrieved all stars within a radius of one degree from the Gaia archive1 and then searched for host clusters as explained in the following and as illustrated schematically in Fig. 1.
Cluster detection
Because clusters are gravitationally bound systems, cluster members share similar positions (RA, DEC), proper motions (µ α * , µ δ ), parallaxes ( ), and radial velocities.Thus, stars belonging to a common cluster can be separated from fore-or background stars as overdensities in the multidimensional space spanned by the available membership constraints.
Gaia DR3 provides information for all six of these parameters, although radial velocity information is only available for a rather limited number of stars owing to the faintness of most member stars.Hence, our analysis employs only positions, proper motions, and parallaxes for the cluster identification.Where available, radial velocity information was used to assess membership probabilities of Cepheids (cf.Sec.2.3).
We detected clusters using the publicly available code called hierarchical density-based spatial clustering of applications with noise (McInnes et al. 2017, HDBSCAN).As is common practice, we included only stars in our analyis whose parallax signalto-noise ratio /σ ≥ 5, whose renormalised unit weight error (RUWE) is smaller than 1.4 to exclude sources with poor astrometry (Fabricius et al. 2021, e.g., companions), and that are brighter than G = 18 mag (parameter phot_g_mean_mag in table gaiadr3.gaia_source),where the Gaia astrometry is most precise.In practice, this magnitude cut represents no serious limitation for our work and allows us to clearly recover the main sequences of Cepheid-hosting clusters that are several magnitudes fainter than their Cepheid members.
Gold, Silver and Bronze samples
Gaia parallaxes of all stars considered for membership were corrected for systematics using the recipe provided by L21.
The code HDBSCAN uses the n−dimensional distance between objects to identify overdense regions.As Cepheids and open clusters are located in the Galactic plane, we used Galactic coordinates (l, b) for the positional constraints rather than RA and DEC.The ability of HDBSCAN to detect arbitrarily shaped clusters was particularly useful for our purposes because the physical shape of clusters in various stages of dispersal was not known a priori.The only fixed input parameter required by HDBSCAN is the number of stars p that are expected to qualify an overdensity as a cluster.Deviations of the number of cluster stars s from p will cause overdensities with s < p to remain undetected by HDBSCAN and may sometimes result in a single cluster being split into multiple parts if s > p.To ensure that our analysis was not sensitive to these undesirable side effects, we repeated our analysis using ten different values for p ranging from 10 to 100 in increments 10 and found a consistent number of cluster members in each case.The mean (median) number of member stars reported per cluster is 230 (152) (cf.Sect.3).
Following Castro-Ginard et al. (2018) and Hunt & Reffert (2021), we rescaled each of the Gaia astrometric parameters to variables with zero mean and unit standard deviation by subtracting the mean from each parameter and rescaling parameters such that the 25−75% percentile has unit variance.This procedure ensures equal weighting among the five dimensions and improves robustness against outliers.
Inspection of the parallax distributions returned by HDB-SCAN revealed outliers in parallax.To retain only likely cluster members, we determined the mode of the parallax distribution returned by HDBSCAN and retained all cluster members whose parallaxes agreed to within 3 standard deviations of a Gaussian fit to the parallax distribution centered on the mode.
At distances beyond 2 kpc, cluster identification becomes increasingly limited due to the current parallax and proper motion uncertainties of Gaia.Because our goal of calibrating the Galactic LL requires utmost accuracy and precision, we prioritized greater purity (lower contamination) at the potential cost of completeness.We thus visually inspected all identified cluster candidates to ensure that cluster stars were overdense in each of the membership constraints considered and that the resulting color-magnitude diagrams indicated a coeval population being detected, as evidenced by a clearly visible main sequence.Additionally, we discarded clusters in which a majority of main-sequence stars exceeded the brightness of their candidate Cepheid members.
For each cluster, HDBSCAN provided a list of likely cluster members together with membership probabilities.By design, all identified clusters were within the projected vicinity of Cepheids.However, these same Cepheids were not necessarily selected as cluster members by HDBSCAN, requiring a separate membership analysis for Cepheids in the detected clusters (cf.Sec.2.3).
Cluster parameters
For each cluster that passed the first visual screening, we computed the center position in RA and DEC.We additionally computed averages and dispersions in both proper motion directions, parallax, and radial velocity, where available.
Cluster parallaxes
For a given source, the Gaia parallax systematics are well known to depend on its sky position as well as its magnitude and color (L21).Magnitude and color trends are likely related to the sophisticated on-board processing of Gaia, which was implemented to avoid saturation across the extreme dynamic range of the survey (limit 21.7 mag).Using the cluster members returned by HDBSCAN, we investigated whether an optimal magnitude and color range could be identified to obtain the most reliable and precise average cluster parallax.We calculated the deviation from the cluster average, ∆ = − i , for all member stars of all host clusters.We combined all values of ∆ into a single set, which we divided into bins of 0.2 mag in G band.For each bin, we estimated the weighted mean and the weighted error of ∆ .Figure 2 illustrates this result and shows a noticeable decrease in the variance of ∆ for 12.5 < G < 17 .Systematic trends at G < 12.5 mag can be partially due to the gating mechanism of Gaia or to differences in photometric processing 2 .We note that the exact magnitude range is not critical for the estimation of the cluster parallaxes.For example, restricting the magnitude range further to 13.5 − 17 mag changes the mean cluster parallax by less than 2 µas, while increasing the uncertainty in average parallax for clusters with 100 members (e.g., CWNU 175 or vdBergh 1) by approximately 1 µas (cf.Sect.3).Because of these clear and consistent trends and to avoid sensitivity to gatingrelated issues, we adopted the range of 12.5 < G < 17 mag as the optimal range for determining high-fidelity average cluster parallaxes and their uncertainties.We further restricted the color range of member stars to 0.23 < Bp − Rp < 2.75 to avoid the color range for which Fig. 2 shows increasing deviations from zero residuals, accompanied by increasing uncertainties due to low statistics.Several studies have shown the existence of nonzero residual parallax offsets for stars brighter than G < 13 mag after the L21 corrections were applied (e.g., Huang et al. 2021;Zinn 2021;El-Badry et al. 2021;Riess et al. 2021;Riess et al. 2022a).However, analyses of open and globular clusters, as well as of the LMC, have shown the L21 procedure to accurately correct parallax systematics to within ∼ 1 µas (Flynn et al. 2022;Maíz Apellániz 2022) in the optimal magnitude and color range established above.As a result, a significant nonzero residual parallax offset is expected for (bright) Cepheids, while no residual parallax offset is expected for cluster members after the L21 corrections are applied.
The final cluster parallaxes were computed as the weighted mean of the retained cluster members.The total parallax uncertainty sums the statistical uncertainty determined as the error on the weighted mean in quadrature and the systematic contribution due to angular covariance determined by Apellániz et al. (2021).Because our initial search radius around Cepheids is 1 deg, the full diameters of all clusters is significantly smaller than 2 deg.This allowed us to consider the estimation of angular covariance 2 According to Fig. 1 in L21, no gating is applied to stars fainter than 12.5 mag.However, the WC0b and WC1 calibration models of the astrometric field overlap in the range 12.5 < G < 13 mag, which implies a transition from 2D images to binned 1D images, respectively.based on the LMC alone as given by V ,LMC in their Eq. 10 (cf. also Sect.2.2 in Ripepi et al. 2022c), neglecting wide-angle contributions estimated using quasars.This is analogous to the approach taken by Zinn (2021) in conjunction with the angular covariance estimates based on the Kepler field.In practice, this reduces the error floor for average cluster parallaxes from 10 to 7 µas.Because the mean separation of our Cepheid clusters is very large, covariance among clusters is negligible.
Maximum angular separations
We calculated the projected distance of the Cepheid from cluster center assuming that both objects are at the distance of the cluster.Candidate associations with separations greater than 25 pc were discarded in favor of sample purity and to ensure that cluster average parallaxes can be used as accurate proxies for Cepheid parallaxes.Hypothetical Cepheids residing in extended tidal tails (Jerabkova et al. 2021) would thus be excluded from our analysis.We refer to Cepheids as coronal cluster members if their projected separation from cluster center exceeds 8 pc but does not exceed 25 pc.
Proper motions
We computed bulk cluster proper motions as the mean of all clusters members as well as proper motion dispersions using cluster members in the color and magnitude range used for parallaxes.We used proper motions to reject cluster candidates as spurious asterisms if the resulting velocity dispersion exceeded reasonable values for gravitationally bound systems following Cantat-Gaudin & Anders (2020) and Hunt & Reffert (2021).Specifically, up to = 0.67 mas, we rejected associations whose projected velocity dispersion exceeds 5 × √ 2 [mas/yr] (5 km s −1 ), whereas a maximum difference of 1 mas yr −1 was allowed for clusters with a smaller parallax to reflect the increased uncertainties, in particular, of the fainter main-sequence cluster members.Thus, we required3 In practice, however, all retained clusters exhibit a significantly lower velocity dispersion, with a mean value of 2.8 km s −1 (cf.Fig. A).Inspection revealed that the proper motion dispersion estimated using only spatially densely concentrated cluster members returned by our clustering analysis underestimated the intrinsic velocity dispersion of true cluster members observed at large angular separations, which require a statistically greater velocity dispersion to reach their large separations from cluster centers.To avoid unrealistically low membership probabilities for coronal cluster Cepheids (cf.Sec.2.3), we therefore adopted twice the standard deviation determined based on the member stars recovered by HDBSCAN as the more conservative estimate of true cluster proper motion dispersion when assessing Cepheid membership in clusters.
Radial velocity
Cluster radial velocities (RV) are computed using Gaia DR3 mean radial velocities (Katz et al. 2022, parameter radial_velocity from table gaia_source).For each cluster with available DR3 RVs, Table 1 lists the number of (non-Cepheid) cluster member stars, their median RV, standard error on the cluster median RV, and the Cepheid paramaters.We did not consider cluster RVs based on few stars ( 3) sufficiently reliable for further analysis.Thus, we did not consider RV as a membership constraint for the candidate host clusters of WX Pup, CV Mon, IQ Nor, and SX Vel.
Cepheid membership determination
We computed cluster membership probabilities for Cepheids whose proper motions and parallaxes separately agreed to within approximately 3σ of their potential host cluster parameters.This subsection presents our method, and the resulting probabilities are presented in Sec. 3. A tolerance of up to 0.5σ was permitted in this initial screening.In this context, σ refers to combined (square-summed) dispersions or uncertainties, depending on the parameter, of clusters and Cepheids as follows.For proper motions, the cluster dispersion as described in Sec.2.2.3 was combined with the Cepheid uncertainties reported by Gaia.For parallaxes, σ contains the squared sum of uncertainties of the weighted cluster average (no significant internal dispersion expected), the individual Cepheid parallax uncertainty, and an additional 15 µas uncertainty to reflect the magnitude dependence of the residual parallax offset after applying the L21 corrections.
We computed Cepheid membership probabilities using the likelihood formalism developed in Anderson et al. (2013) and the membership constraints , µ * α , µ δ , and RVs.Strictly speaking, this approach performs a hypothesis test under the null hypothesis of the Cepheid cluster membership and can only reject this null hypothesis, not prove it.As in Anderson et al. (2013), we computed the Bayesian likelihood where the vector x contains the differences between Cepheid and cluster parameters, that is, and Σ is the diagonal covariance matrix containing the squared values of σ for the various membership constraints, as explained above.Our threshold for rejecting the membership hypothesis was P(B|A) < 0.0027, which corresponds to a 3σ rejection criterion.Stars with a higher probability are considered bona fide cluster Cepheids provided the host cluster detection is sufficiently robust.Radial velocities were included in this calculation if cluster average RVs (v r,Cl ) could be estimated using at least three member stars and if Cepheid systemic radial velocities, v γ , could be determined using a Fourier series fit to time-series data from either the velocities of Cepheids project (cf.Anderson et al. in prep., VELOCE I) or the literature (e.g., Anderson et al. 2016a).In addition to cluster average values, Table 1 lists RV data for Cepheids, including v γ , its uncertainty, references to data used, and the difference between cluster median and Cepheid v γ , the total uncertainty (summed in quadrature), and the difference between cluster and Cepheid in units of the total uncertainty.The only Cepheid for which RV information significantly contradicts membership is XZ Car, which is part of our Silver sample (cf.Sect.3.2).All other stars are found to agree to within 1.35σ with their host cluster median velocities.Further information about Cepheid RVs and Gaia DR3 radial velocities of Cepheids will be provided as part of the VELOCE project (Anderson et al. in prep.).
In contrast to Anderson et al. (2013), we did not explicitly use the angular separation as an external multiplicative prior because individual cluster members were already separated from the background by our clustering analysis.However, our use of a maximum allowed projected separation of 25 pc could be seen as a flat prior with P(A) = 1 for absolute projected separations smaller than this cutoff value.Ages and chemical compositions were not considered in the calculation of the likelihood.
Cluster Cepheids
We grouped our sample of cluster Cepheids into Gold, Silver, and Bronze samples according to the following criteria.The Gold sample contains cluster Cepheids whose host cluster detections were robust and whose membership likelihoods exceeded the threshold for rejecting the membership hypothesis (cf.Sec.2.3).This sample is best suited for LL calibration.The Silver sample contains cases where the host cluster detection is solid, whereas the likelihood computation quantitatively rejects cluster membership due to a difference slightly larger than 3σ in individual constraints.This sample is of particular interest for the further study to refine possible membership, for instance, taking uncertainties related to stellar multiplicity into account.The Bronze sample is composed of two cases for which the host cluster detection is not as clean as in the Gold sample.
Tables 2-4 list the Cepheids and their host clusters for the Gold, Silver, and Bronze samples, along with their main astrometric information.Representative examples of each set are shown in Figs. 3 and 4. We applied an additional uncertainty of 15 µas when we computed the significance of the disagreement in parallax (cf.Sec.2.3).Table A.1 provides a list of the Gaia EDR3 source ids for all the cluster members and their L21 corrected parallaxes.
Gold sample
The Gold sample consists of 34 Cepheids residing in 28 distinct Galactic open clusters.Out of the 34 Cepheids, 27 Cepheids pulsate in the fundamental mode, and 7 Cepheids pulsate in the first overtone.We identify ST Tau, V0378 Cen, and GH Lup as bona fide cluster Cepheids for the first time.
We cross-matched all 28 Gold sample host clusters with cluster catalogs from the literature (Anderson et al. 2013; Usenko Notes.RV differences between clusters and Cepheids are considered significant only if a sufficient number (here: 3) of cluster stars was available to determine an accurate median for the cluster.The last column shows apparently highly discrepant values in parentheses if they are based on an insufficient number of stars.References listed in Column 'Refs' a: Barnes et al. (1988) He et al. 2022;Hunt & Reffert 2021;Medina et al. 2021).We found cluster parameters in agreement to within 1σ of the previously reported parameters in the literature for 24 of them.However, we found disagreements greater than 2σ among at least one of the astrometric parameters for the host clusters of SX Vel, IQ Nor, and VW Cru.
Last but not least, we identified four entirely new clusters that host one Cepheid each.We denoted them by the prefix Cl followed by the Cepheid name.Additional information for a subset of Gold sample cluster Cepheids is provided below.SX Vel is found to be a member of a newly detected host cluster (Cl SX Vel, d = 2012 ± 29 pc) at a projected separation of 9.6 pc.The presence of multiple clusters in close proximity somewhat complicates this membership analysis.Anderson et al. (2013) investigated multiple possible host clusters, including Bochum 7, NGC 2660, FSR 1441, SAI 94, and Ruprecht 70, and we here add NGC 2659.Membership in Bochum 7 (5754 pc; cf.Kharchenko et al. 2005) and SAI 94 (3515 ± 60 pc; cf.Elsanhoury & Amin 2019) is readily excluded based on distance, while proper motion differences exclude membership in NGC 2660, and FSR 1441(Cantat-Gaudin et al. 2018).However, NGC 2659 and Ruprecht 70 require some discussion because the computed likelihoods for cluster membership are consistent with the hypothesis of membership for both and because the likelihood obtained for NGC 2659 is even higher (0.65) than for Cl SX Vel (0.17).However, closer inspection revealed that the higher likelihood for NGC 2659 is driven by weaker proper motion constraints (twice larger dispersion).The parallaxes of both clusters agree to within 1.1σ (497 ± 7 µas vs 508 ± 7 muas).Additionally, the observed separation of 43 pc is inconsistent with our maximum allowed separation of 25 pc.Similarly, for Ruprecht 70, the separation of 34 pc rejects this association, although the likelihood alone (0.004) would not reject membership according to our criteria.
IQ Nor is associated with a cluster at a distance of 1839 ± 32 pc.Previously, Anderson et al. (2013) Host cluster parameters.Right: Cepheid parameters.The average cluster parallaxes were estimated using stars in the range 12.5 < G < 17 as explained in Sect.3. The uncertainty includes the contribution from angular covariance.( * ) denotes first overtone pulsators.The second last column states the projected separation of the Cepheid from cluster center in pc.The last column states the membership probability if HDBSCAN considers the Cepheid a member and "-" if not.ATO J297 † abbreviates the full identifier of ATO J297.7863+25.3136.
VW Cru resides in a cluster reported independently as CWNU 175 while this article was in preparation (He et al. 2022).Although Anderson et al. (2013) previously investigated possible membership in Loden 624 (Kharchenko et al. 2013), we note that CWNU 175 is a different physical object separated by 1.9 deg from Loden 624.
WX Pup is a coronal member of the cluster UBC 231 (see also Zhou & Chen 2021) and a good example of how the Gaia systematics can limit the ability to detect host clusters because the cluster and Cepheid parallaxes differed by 3.6σ prior to applying L21 parallax corrections.After applying L21 corrections, this differences reduces to 1.8σ.While the membership likelihood of WX Pupis a relatively low 1% and the projected separation of 22.2 pc is close to our cutoff, its membership in UBC 231 is not rejected according to the criteria we specified.We searched for other cases where the chronological order of the L21 corrections would affect the conclusion concerning membership, but found none.
ATO J297.7863+25.3136was discovered recently (Heinze et al. 2018) and identified as a member of Cluster 41 by Medina et al. (2021).We here confirm this association at a distance of 2456 ± 49 pc.However, this cluster is located in a highly reddened region of the sky, limiting its usefulness for LL calibration (cf.Fig. 5).
SV Vul is especially valuable for LL calibration due to its long period because the majority of Cepheids in distant supernovahost galaxies have periods log P > 1.2 (e.g., Riess et al. 2018).We find a very high likelihood of 90% for this cluster Cepheid combination at a distance of 2354 ± 49 pc, and we note the small ∼ 6.5 pc separation from cluster center.Thus, our analysis confirms previous statements of the SV Vul cluster membership reported by Negueruela et al. (2020) and Medina et al. (2021).Moreover, inspection of several membership constraints as well as the residuals from our LL calibration does not corroborate the possibility that the parallaxes of SV Vul are unreliable, reported by Owens et al. (2022) (cf. Fig. 6 and Sect. 4.3).We therefore find no reason to discard this valuable star from LL calibration.
Silver sample
The Silver sample contains three Cepheids with likelihoods that are formally inconsistent with membership in well-defined clusters according to our criteria.However, disagreements among the individual membership constraints are sufficiently small to warrant additional discussion and inspection.
AP Vel was previously reported as a member of the cluster Ruprecht 65 (Chen et al. 2015) located at a distance of 2085 ± 32 pc.The low membership probability is dominated by the 3.3σ parallax difference.We do note, however, that the proper motion parameters of AP Vel (µ * α , µ δ ) are within 2.3 and 1.7σ of the cluster averages, and that the Cepheid is located rather close to (0.21 deg) from cluster center.
X Pup is located at a rather large separation of ∼ 24.3 pc from the center of its possible newly identified host cluster.The low likelihood is driven by proper motion differences between Cepheid and cluster, which are significant at the level of ∼ 3.1 and 3.3σ for µ * α and µ δ , respectively.However, we note that the total velocity dispersion of Cl X Pup is merely 3.3 km s −1 , which may indicate that an underestimated proper motion dispersion was used to calculate the membership.Additionally, the comparatively large separation from the cluster (cf.Sec.2.2.3) as well as orbital motion tentatively reported by Anderson et al. (2016a) may contribute to deviations in proper motion.We note the good agreement in parallax (1.4σ) and radial velocity, where the Cepheid barycentric velocity is 71.02 ± 0.16 km/s (Anderson et al. in prep), which is fully consistent with the median cluster radial velocity based on four stars reported in Gaia DR3 (74 ± 10 km/s) (cf.Table 1).We therefore consider the cluster membership of X Pup to be potentially underestimated due to an underestimated cluster proper motion dispersion.Further study is required to ascertain its membership before X Pup is included in the Gold sample.
XZ Car is situated at a projected separation of 15 pc from its potential newly identified host cluster Ruprecht 93.Although the parallax of XZ Car fully agrees with that of the cluster, we find a low membership probability due to differences in the kinematic membership constraints, notably radial velocities, which differ by 33 km s −1 between the pulsation-averaged Cepheid RV and the median RV of the 13 cluster members with DR3 radial velocities (cf.Table 1).Although XZ Car is a long-term spectroscopic binary and exhibits a trend of its pulsation-averaged velocity v γ that exceeds 5km s −1 over a baseline of ∼ 40 yr (Anderson et al. 2016a, Shetye et al. in prep.), we caution that orbital motion is unlikely to explain the large RV difference.Additionally, µ * α and µ δ differ by 2.9σ and 2.6σ.We note that evidence of orbital motion has also been found using Gaia proper motion anomalies (Kervella et al. 2019), however, indicating that proper motion may also provide incorrect membership indications for XZ Car.It would be intriguing (but beyond scope for this article) to investigate the nature of the orbit and the companion required to explain these differences.However, XZ Car does not appear to be gravitationally bound to Ruprecht 93.Further membership analysis using the full Gaia temporal baseline might clarify this high-interest association.
Bronze sample
Clusters reported as part of the Gold and Silver samples can be clearly distinguished from field stars in position and proper motion.However, these distinctions were less clear in the case of possible host clusters (tentatively labeled asterisms) reported here as part of the Bronze sample.Additionally, the Gaia CMDs exhibit two main sequences, suggesting likely fore-or background contamination, perhaps by spiral arms being crossed (cf. Figure 4).Unfortunately, the cluster membership probabilities provided by HDBSCAN do not allow us to filter out contaminants.However, there appear to be a clear overdensities in parallax space for stars in the vicinity of both BB Cen and V0620 Pup, and we note that the computed likelihoods for the Cepheid are high and fully consistent with cluster membership, assuming the cluster is real.
Rejected associations
Our analysis refuted the cluster membership of several Cepheids previously considered as cluster members in the literature, and these cases are listed in Table 5. Cepheids considered as possible cluster members in the literature that were not found to be bona fide cluster Cepheids here.
LL and Gaia zeropoint offset calibration
In this section, we calibrate period luminosity relations for MW Cepheids that pulsate in the fundamental mode while simultaneously investigating residual parallax offsets that are applicable after applying the L21 corrections.Section 4.1 describes the observational data for MW Cepheids, Sect.4.2 contains a crosscheck of the expected zero residual offset applicable to cluster parallaxes using the LMC, and Sect.4.3 describes the calibration of the MW LL using combined cluster and field Cepheids.
Milky Way Cepheids
We compiled samples of fundamental-mode MW Cepheids based on the astrometric and photometric quality criteria tabulated in Table 6.The astrometric constraints were compiled such as to reproduce the sample of 68 low-reddening MW Cepheids observed by the SH0ES team using HST (Riess et al. 2018;Riess et al. 2021).However, a larger sample of Cepheids is considered in other photometric bands and using Gaia photometry.Hence, we added cuts based on astrometric goodness-of-fit parameters to remove Cepheids whose astrometry was very likely flawed, such as RX Cam, the only Cepheid for which an orbital parallax solution is available in Gaia DR3.The photometric criteria we adopted include a magnitude cut to avoid saturated stars, a color cut to limit exposure to reddening, a cut on the number of avail-able photometric epochs based on which the mean magnitudes were computed, and the parameter ipd_frac_multi_peak < 7, which was adopted to limit exposure to blended sources.We further adopted a cut on period for the Gaia sample P > 3.9d to avoid exposure to misclassified overtone Cepheids.The most stringent cut in practice is that we require individual iron abundance measurements based on high-resolution spectroscopy for all sample stars (Genovali et al. 2014(Genovali et al. , 2015)).The final sample of fundamental-mode classical Cepheids for W G contains 225 stars and is listed in Table 7.
We compiled ground-based photometry in the Johnson V and Cousins I bands from Groenewegen (2018) and Breuval et al. (2020Breuval et al. ( , 2021)).This dataset has been homogenized by Groenewegen ( 2018) and was studied extensively.It mainly includes V− and I−band data reported by Mel'nik et al. (2015), which are based on observations by L. Berdnikov (e.g., Berdnikov 2008).Reddening values, E(B − V) for Galactic Cepheids are taken from Fernie et al. (1995) and scaled by a factor of 0.94 following Groenewegen (2018).We also computed reddening-free Wesenheit magnitudes (Madore 1982) using V and I−band data, W V I (cf.below).
We collected Gaia DR3 photometry in Gaia G band, as well as integrated Bp and Rp spectrophotometry (Ripepi et al. 2022b;Riello et al. 2021).Specifically, we used intensity-averaged magnitudes from Gaia CU7 Specific objects studies (parameters int_average_g, int_average_g_error and analogous for Bp and Rp from table gaiadr3.vari_cepheid)published as part of the Gaia DR3 variability analysis for Cepheids (Ripepi et al. 2022b;Eyer et al. 2022).We also computed reddening-free Wesenheit magnitudes, W G , based on G, Bp, and Rp as stated below.
Finally, we collected HST WFC3-IR F160W photometry for MW Cepheids from Riess et al. (2019) and Ripepi et al. (2022c), as well as their reported reddening-free NIR Wesenheit magnitudes W H . Benefits of this homogeneous HST dataset include the excellent calibration of the HST photometric system, homogeneity with respect to extragalactic Cepheids, high spatial resolution, and the lack of time-and location-specific calibration issues typical of ground-based NIR photometry.We also experimented with ground-based near-IR photometry available from a range of literature references following Breuval et al. (2021), notably combining ground-based J, H, K s photometry from Laney & Stobie (1992), Monson &Pierce (2011), andGenovali et al. (2014).However, the homogenization of these data sets is not as straightforward due to different photometric systems in use (e.g., improvements in detector technology), the calibration of atmospheric absorption in the NIR, and the standardization of NIR passbands.After some tests, and notably in comparison with the Notes.The constraints relate to parameters given in Gaia DR3 data tables gaia_source and vari_cepheid.Astrometric constraints a are applied to all Cepheids used in this work and reproduce the sample of Cepheids used by Riess et al. (2021).astrometric_chi2_al quantifies the goodness of fit in the along-scan direction without taking into account astrometric_excess_noise.Positive values of astrometric_excess_noise indicate that the source may not be astrometrically well behaved, and this excess noise may be relevant if astrometric_excess_noise_sig > 2. Since currently, a detailed guidance for how to use these parameters is lacking, we adopted very conservative cuts to remove the clearest outliers.Photometric constraints are applied only to the sample of Cepheids for which Gaia photometry is used.
In particular, the parameter ipd_frac_multi_peak specifies the percentage of multiply peaked Gaia windows that were accepted by the image parameter determination.We adopted a constraint in this parameter to avoid blending of the Cepheid photometry with nearby sources, which particularly applies to the Bp and Rp spectrophotometry.An overview of these samples is given in Table 7.
a Descriptions available here: https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dm_main_source_catalogue/ssec_dm_gaia_source.htmlHST F160W photometry available from Riess et al. (2019), we discarded ground-based NIR photometry as not sufficiently accurate for the purposes of our study.HST WFC3-IR observations are subject to count-rate nonlinearity (CRNL) at the level of 0.0075 ± 0.006 mag/dex (Riess et al. 2019).We took these CRNL corrections into acount when we compared Cepheid samples spanning a significant dynamic range, that is, when we compared MW Cepheids to extragalactic Cepheids, such as those in the LMC, or when we compared them to Cepheids in supernova-host galaxies (SN-hosts).CRNL corrections to offset differences among MW Cepheids alone are at the level of 1 − 2 mmag and were therefore neglected.
We used the following definitions for Wesenheit magnitudes W V I (Breuval et al. 2022), W H (Riess et al. 2016), and W G (Ripepi et al. 2019): Extinction corrections were applied using reddening coefficients calculated for a Fitzpatrick (1999) reddening law with R V = 3.3 and a spectral energy distribution representative of a 10 d Cepheid near the center of the instability strip (cf.Anderson 2022) as given by a Castelli & Kurucz (2003) model atmosphere with T eff = 5400K, [Fe/H]= 0.0, log g = 1.5.Specifically, this yields R V Johnson = 3.553, R I Cousins = 2.095, R F160W = 0.674 and R Bp = 3.701, R G = 2.991, R Rp = 2.196, where the subscript F160W refers to the HST WFC3-IR system.All filter profiles were downloaded from the Spanish VO filter profile service4 .These values were used in conjunction with color excess values defined for Johnson-Cousins E(B − V) to estimate extinction in the respective bands.We also compiled individual iron abundances from the literature ensuring a common solar iron abundance (cf.Sec.4.2).
Confirming the adequacy of L21 parallax corrections for
cluster parallaxes using the LMC Lindegren et al. ( 2021) provided a recipe for correcting systematic parallax errors related to source magnitude, color, and skyposition (ecliptic latitude) based on millions of quasars and LMC stars as well as 7000 bright physical stellar pairs.However, previous articles have presented evidence that residual parallax offsets need to be applied even after the L21 corrections are applied.For example, Riess et al. (2021) determined an additional constant parallax offset of 14 ± 6 µas based on 75 Galactic Cepheids in the magnitude range 6 < G < 12, and these residual offsets are now well documented using different methods and stellar types (e.g., Zinn et al. 2019;Zinn 2021;Khan et al. 2019;Schönrich et al. 2019;Stassun & Torres 2021;Ren et al. 2021;Wang et al. 2022;Flynn et al. 2022).Hence, an accurate LL calibration based on Cepheid parallaxes requires solving for the residual offset applicable to the sample of stars used in the calibration.However, recent work based on open and globular clusters as well as the Magellanic Clouds has shown that the L21 recipe accurately corrects parallax systematics of stars fainter than G > 13 mag (Flynn et al. 2022;Maíz Apellániz 2022).As a result, average cluster parallaxes based on L21-corrected member stars in this magnitude range are particularly useful for LL calibration because no further offsets need to be determined, that is, ∆ Cl = 0. Average cluster parallaxes can therefore inform the residual parallax offset applicable to Cepheid parallaxes, ∆ Cep .This is done in Sect.4.3.However, prior to adopting ∆ Cl = 0, we decided to verify the validity of this approach using observations of Cepheids in the LMC, whose distance µ DEBs is known with an accuracy of 1.3% from detached eclipsing binary stars (Pietrzyński et al. 2019).
We compiled Johnson-Cousins V− and I−band photometry of LMC Cepheids from the OGLE-III catalog of variable stars (Soszyński et al. 2017) and selected fundamental-mode OGLE-III Cepheids within the matching period range of cluster Cepheids (3.9 − 45 d) and cross-matched their positions (maximum search radius 2 ) with Gaia DR3 positions to obtain Gaia G−band, Bp, and Rp photometry from the SOS Cepheid list (Ripepi et al. 2022b, gaiadr3.vari_cepheid).The accuracy of the cross-match was verified by considering the agreement between periods reported by OGLE and Gaia.We adopted the OGLE-III Cepheid sample instead of the Gaia DR3 list of Cepheids in the LMC direction because a) geometric corrections (cf.below) are well described for this sky region (Pietrzyński et al. 2019), and b) the classification of Cepheids in OGLE-III benefits from longer time series and long-standing human experience in classification.Since OGLE-III covers the main part of the LMC disk, and thus the bulk of the Cepheid population, including outer regions from Gaia is not expected to add a significantly greater number of Cepheids outweighing the downsides related to the geometric correction.We used reddening maps based on red clump stars (Skowron et al. 2021) to correct for extinction using the values of R λ mentioned in Sect.4.1 and the conversion E(V − I) = 0.686 • E(B − V) derived analogously.
For our NIR analysis, we used HST WFC3 observations of 70 LMC Cepheids (Riess et al. 2019) because they can be directly compared to the HST observations of MW cluster Cepheids (Riess et al. 2022a) after the appropriate CRNL corrections are applied.Because the NIR Wesenheit magnitudes, W HS T H , reported by Riess et al. (2019) already include a CRNL correction applicable for the comparison with Cepheids in the SN-host sample, we recomputed W H using Eq. 5 and their original HST observations in the individual passbands F555W, F814W, and F160W.We then applied appropriate CRNL corrections (average of 0.010 mag) to account for the flux difference of 0.9 − 1.8 dex between MW cluster and LMC Cepheids.
We applied geometric corrections to apparent magnitudes following Jacyszyn-Dobrzeniecka et al. (2016), effectively treating all LMC Cepheids as though they were observed at the same distance, determined to an accuracy of 1.3% using detached eclipsing binary systems (Pietrzyński et al. 2019).As a result of this correction, the effect of the LMC intrinsic depth on the observed scatter in the LL is minimized.This is necessary due to the large sky region covered by OGLE-III (1.7 kpc) and to ensure that the distance estimate to the LMC reflects the distance to the barycenter of the detached eclipsing binaries.Moreover, the correction decreases the observed scatter in the LL, resulting in a slight (∼ 0.004 mag) improvement in the uncertainties for the LL intercept β.
For LMC Cepheids, we fit linear LLs of the form m = α(log P − log P 0 ) + β using a least-squares fitting procedure and a 2.7σ outlier rejection (applying Chauvenet's criterion for the HST LMC Cepheid sample); m denotes apparent magnitudes corrected to the LMC barycenter.Depending on the photometric data set, the samples used in the fit contained between 68 and 712 LMC Cepheids.The results for a range of individual photometric bands and Wesenheit magnitudes are listed in Table 8, including the number of available Cepheids, and the assumed intrinsic width of the LL.
To determine the validity of the expected ∆ Cl = 0, we computed the absolute magnitude of LMC Cepheids by applying the distance modulus measured obtained using detached eclipsing binaries (Pietrzyński et al. 2019, µ DEB = 18.477 ± 0.004 (stat) ± 0.026 (syst) mag), These absolute magnitudes of LMC metallicity Cepheids were then compared to MW Cepheids using the astrometry-based luminosity (Arenou & Luri 1999, ABL), which avoids the issue of inverting parallaxes to obtain distances, ABL = 10 Superscript in Eq. 8 implies that β and δ are given in apparent magnitudes after applying geometric corrections to the LMC Cepheids.β denotes the LL intercept at the average sample metallicity, δ = β − γ[Fe/H] is the LL intercept corrected to solar metallicity, and ∆ [Fe/H] is the difference in iron abundance between the MW and LMC Cepheid samples.Table 9 lists the results of this comparison for six individual photometric bands and three Wesenheit formulations.
The metallicity difference between LMC and MW Cepheid requires careful consideration.For the LMC, we adopted a common mean iron abundance, [Fe/H] LMC = −0.409± 0.003, based on the recently remeasured average iron abundances of LMC Cepheids that has been shown to be consistent with a single value (Romaniello et al. 2022, dispersion 0.076 dex).For MW cluster Cepheids, we adopted individual iron abundances as described above and compiled in Table 10.Although several improvements in the determination of γ have been recently presented (Gieren et al. 2018;Breuval et al. 2021;Breuval et al. 2022;Ripepi et al. 2022a), we here preferred to use γ as a free parameter, while first fixing ∆ Cl = 0 and then performing the same comparison while fitting for γ and ∆ Cl simultaneously.The individual MW Cepheid abundances are compiled in Table 10.
Our results for γ listed in Table 9 show that metal-rich Cepheids are typically brighter than metal poor Cepheids in each of the photometric bands as well as the three Wesenheit formulations.This echoes recent results by Breuval et al. (2022), albeit at lower precision because the metallicity range we considered is limited.Additionally, as noted by Breuval et al., our results are consistent with predictions of γ derived from Geneva stellar evolution models (Anderson et al. 2016b).We further confirm the particularly strong metallicity dependence in Gaia G band and the Gaia Wesenheit function W G reported by Breuval et al. (2022) and Ripepi et al. (2022a), while neither Bp nor Rp exhibit such a steep trend with metallicity.
Concerning ∆ Cl , we find residual offsets consistent with 0 to within 1σ in all nine cases, and a weighted mean value of Notes.β is expressed here in apparent magnitudes.The last column indicates the intrinsic width of the LL due to the finite width of the instability strip (WIS), adopted from Breuval et al. (2022).Superscript a indicates that no HST F160W-IR CRNL corrections were applied to observations of LMC Cepheids for this comparison.Magnitudes in the NIR Wesenheit function were recomputed using Eq. 5 based on the observations reported by Riess et al. (2019).Notes.Wesenheit magnitudes of cluster Cepheids, W H , were computed using Eq. 5 using the photometric data for the individual passbands presented by Riess et al. (2022a).Cluster parallaxes were bias corrected using the L21 approach.Superscript a indicates that HST WFC-IR CRNL corrections have been applied to account for the 0.9 to 1.8 dex difference in flux among MW cluster Cepheids and LMC Cepheids (mean correction 0.010 mag).The weighted mean and associated uncertainty for ∆ Cl is −4 ± 6 µas.
∆ = −4 ± 6 µas.Additionally, we note that the values of γ obtained when fixing ∆ Cl = 0 are consistent to within their uncertainties with γ values obtained when both parameters are free, as well as with recent literature results.In summary, our comparison involving the LMC thus strongly supports that the average cluster parallaxes determined above exhibit no evidence of residual parallax offsets beyond the L21 corrections.
Galactic LL and residual parallax offset for Cepheids
We calibrated the Milky Way LL and the residual parallax offset applicable to MW Cepheid parallaxes, ∆ Cep , using our Gold sample of cluster Cepheids.We note that the following exclusively considers MW Cepheid information and is thus independent of the LMC, which was merely used as a cross-check in Sect.4.2.We fit the MW LL while simultaneously determining the residual parallax offset for Cepheids, ∆ Cep , using for Cepheids, ( 10) Both LL slope and zeropoint were used as free parameters, and ∆ Cl = 0 as explained above.
We first performed this fit at the sample average iron abundance and then repeated the fit assuming a fixed value of γ from the literature, specifically, γ W H = −0.217± 0.046 (Riess et al. 2022b) and γ W G = −0.384± 0.051 (Breuval et al. 2022).We used individual Cepheid iron abundances, not the sample average, to determine the zeropoint at solar metallicity, δ.Using fixed literature slopes for γ has the significant benefit of γ being informed by a wider range of metallicities, while both the range of [Fe/H] in the MW sample and the correction to the solar value are small.Although we propagated the errors, this metallicity correction has virtually no effect on the final results due to the only slightly supersolar metallicity of MW Cepheids.Following common practice (e.g., Kodric et al. 2018;Riess et al. 2022b), we applied a 2.7σ outlier rejection.This step removed 24 of 249 Cepheids for the Gaia-only sample, the vast majority of which are > 3σ outliers.The ABL fit results are illustrated in Figures 7 and 8. .46.5 6.6 5.293 ± 0.005 5.313 ± 0.029 0.07 ± 0.07 G15 0.312 ± 0.050 V0367 Sct 6.295 513.6 2.7 6.7 7.2 5.865 ± 0.033 6.125 ± 0.054 0.05 ± 0.08 G15 V0378 Cen ( * ) 6.459 517.8 3.7 6.5 7.5 5.468 ± 0.002 0.08 ± 0.06 LL11 0.374 ± 0.049 V0379 Cas ( * ) 4.305 556.6 2.2 6.5 6.9 5.876 ± 0.005 0.06 ± 0.08 L11 V0438 Cyg 11.211 561.9 3.6 6.4 7. Notes.Cluster average parallaxes include the corrections as described by L21.Iron abundances were rescaled by Genovali et al. (2015) to the common solar abundance A(Fe) = 7.50 (Grevesse & Sauval 1998).Color excess values E(B-V) are taken from Fernie et al. (1995) and scaled by a factor 0.94 (cf.Groenewegen 2018).The symbol ( * ) denotes Cepheids pulsating in the first overtone mode.a : Observations reported in the HST system (W H ) are computed using Eq. 5 and the individual passband data from Riess et al. (2022a), that is, they do not contain the CRNL correction needed for comparison with the SN-host Cepheid sample.We note that CRNL corrections (∼ 0.05 mag) were applied to the apparent WFC3/IR F160W and NIR Wesenheit magnitudes to facilitate the comparison with Cepheids in supernova-host galaxies and simplify the comparison with the SH0ES distance ladder.
Both results establish a nonzero residual parallax offset for MW Cepheid parallaxes at 3σ significance, and this result is fully consistent with the −14 ± 6 µas offset determined by Riess et al. (2021).This provides additional evidence that clusters and Cepheids require different residual parallax offsets.
To directly compare our results to the value of M W H,1 determined as part of the SH0ES distance ladder (Riess et al. 2022b,a), we fixed the LL slope to the SH0ES baseline value Our result for δ agrees to within 0.3σ with the value of M W H,1 determined by the SH0ES team via the two-parameter Gold sample fit in Table 5 of Riess et al. (2022a), where M W H,1 = −5.907± 0.018 mag.Nevertheless, our approach to determine δ using the NIR Wesenheit function W H (Eq. 14) differs from their approach in three important elements.First, we used a combined fit of Cepheid and cluster parallaxes to obtain an absolute calibration based exclusively on Gaia astrometry.Second, our clustering analysis in Sect.2.1 was conducted entirely independently of Riess et al. (2022a).Third, the samples of cluster member stars differ between our study and Riess et al. (2022a), resulting in an average difference of ∼ 5 µas among cluster parallaxes.We therefore consider our result an important cross-check based on mostly independent astrometric information.
For the corresponding Gaia Wesenheit function (W G ) at sample average metallicity, we obtain β = −6.051± 0.020, α = −3.303± 0.049, (20) and, after correcting to solar metallicity using the individual Cepheid iron abundances, δ = −6.004± 0.019, α = −3.242± 0.047, ( 23) We thus find 1σ agreement for ∆ Cep regardless of whether HST or Gaia photometry is used, and using different, albeit not independent, sets of Cepheids and cluster parallaxes.In particular, we note the improved precision on ∆ Cep determined using Gaia photometry, for which we obtain a 6σ detection that is consistent with the value determined using the independent HST photometry.We further note that metallicity corrections do not challenge the accuracy of our determination of ∆ Cep .To illustrate our results in a more conventional LL form, we plot the absolute Wesenheit magnitudes as a function of log P in Figure 9.
We further applied the same approach for Johnson V−band, Gaia G, Bp, and Rp, and HST F160W photometry.The results are listed in Table 11.In particular, we note that the value of ∆ Cep is consistent within less than 1σ for all nine rows in Table 11. Figure 10 illustrates the results for individual photometric passbands together with linear fits of the LL parameters as a function of the inverse of the effective central wavelength λ of each filter.The average iron abundances of the samples differ by < 0.02 dex, and we thus expect a difference of ∼ 0.02 dex • 0.2 mag/dex = 0.004 mag at most between the values of β evaluated at the lower and upper metallicity of our sample.This difference is well contained within the uncertainties.Fitting the wavelength dependence of α and β as a function of inverse wavelength, we determine the following dependence of LL slope and zeropoint on central wavelength: α = (−3.769± 0.083) + (0.683 ± 0.059)/λ (26) β = (−6.526± 0.056) + (1.208 ± 0.041)/λ . (27)
Using Silver sample Cepheids for LL calibration
Our criteria placed two long-period Cepheids with uncertain cluster membership, X Pup and XZ Car, in the Silver sample, which we conservatively did not use for LL calibration.As explained in Sec.3.2, both stars featured low membership likelihoods due primarily to mismatching kinematic information.However, closer inspection suggested that X Pup is possibly a true cluster Cepheid that can be used for LL calibration (cf.Sec.3.2).We determine the impact of including these stars in our analysis below.
Including X Pup and XZ Car in the cross-check of involving the LMC (Sect.4.2) would not significantly affect the results.For W G we find ∆ Cl = −8 ± 17 µas, γ = −0.418± 0.150 mag/dex, and for W H , we obtain ∆ Cl = −7 ± 16 µas, γ = −0.205± 0.148 mag/dex.All these values agree to within much less than one standard deviation with those obtained using only the Gold sample of Cepheids.
Including XZ Car in the combined LL fit in Sec.4.3 has no impact because it is a 3.5σ LL outlier that would be rejected by the σ−clipping procedure.Including X Pup in the fit does not significantly affect the LL calibration (α = −3.313± 0.049, β = −6.051± 0.020, ∆ Cep = −21 ± 3 all agree to much better than 1σ with results in Eq. 20) and marginally increases the reduced χ 2 by 0.008.Furthermore, X Pup has not been identified as an LL outlier by Riess et al. (2022a) in the NIR Wesenheit formulation.
Fraction of Cepheids in clusters within 2 kpc
The fraction of Cepheids residing in clusters is of interest for understanding clustered star formation (Dinnbier et al. 2022) and the extragalactic distance scale (Anderson & Riess 2018), among other things.Using our Gold sample of cluster Cepheids and data from the recent Gaia DR3, we updated previous estimates of this fraction, f CC,2kpc = N Cl,2kpc /N Cep,2kpc .Assuming that all Cepheid-hosting clusters within 2 kpc could be identified by our method, we have N Cl,2kpc = 22 (Gold sample), which includes 11 coronal members separated by projected distances of 8 − 25 pc from their host cluster centers.
We estimated the total number of Cepheids within 2 kpc, N Cep,2kpc using the photometric parallaxes obtained with our W G Article number, page 18 of 24 LL calibration for all stars classified as DCEP in Gaia DR3 table gaiadr3.vari_cepheid.This yields 180 +32 −38 fundamental mode Cepheids as well as 70 +9 −16 first-overtone or multimode Cepheids, where overtone periods were fundamentalized using the period ratios determined by Kovtyukh et al. (2016, assuming a mean metallicity [Fe/H] = 0.032).For multimode Cepheids, either the fundamental or first-overtone period was used to compute the distance.We also sought to estimate N Cep,2kpc using distances provided by the parameter distance_gspphot in Gaia DR3 table gaiadr3.gaia_sourceas well as Gaia parallaxes (including the residual offset determined in Eq. 24).However, this reduced the size of Cepheid samples by approximately 20% due to limited data availability.We therefore considered the estimation based on photometric distances our baseline result due to greater completeness.The results are tabulated in Table 12, where asymmetric uncertainties reflect the range of stars defined by the 1σ distance or parallax uncertainties.
We thus estimate f CC,2kpc = 0.088 +0.029 −0.019 , where the uncertainties provided denote the full range of possibilities.We further find a slightly higher fraction of fundamental mode Cepheids in clusters, with f CC,2kpc,FM = 0.089 +0.030 −0.018 and f CC,2kpc,FO = 0.081 +0.026 −0.023 , assuming the OGLE classification of cluster Cepheids (Pietrukowicz et al. 2021).If the pulsation modes assigned in Gaia DR3 (Ripepi et al. 2022b) were used instead, the difference would be slightly larger, with f CC,2kpc,FM = 0.100 +0.033 −0.020 and f CC,2kpc,FO = 0.065 +0.021 −0.022 .This difference could be explained by the dependence of f CC on age due to clusters dissolving into the field over time combined with the tendency of overtone Cepheids to originate from older lower-mass stars than fundamental-mode Cepheids, which can be rather young.
We note that a few bright Cepheids, such as Polaris and the cluster Cepheid U Sgr, are not included in the vari_cepheid table.However, their absence does not change the overall result.Our new estimate supersedes our previous slightly lower estimate of f CC,2kpc = 15/217 = 6.9% reported in Dinnbier et al. (2022) due to improvements in our membership determination and the input data from Gaia DR3.
Figure 11 illustrates the fraction of Cepheids residing in clusters within 2 kpc of the Sun as a function of age.Cepheid ages were computed using period-age relations for fundamental and first-overtone Cepheids (Anderson et al. (2016b)).We confirmed that ages based on periods of overtone Cepheids matched ages computed using period-age relations for fundamental-mode Cepheids after fundamentalizing the pulsation periods of firstovertone Cepheids using period ratios of Milky Way doublemode Cepheids (Kovtyukh et al. 2016).Figure 11 thus illustrates the dispersal of Cepheid host clusters over time, an effect previously reported by Anderson & Riess (2018) and also seen in dynamical NBODY simulations (Dinnbier et al. 2022).We caution that young ages are rather poorly sampled within 2 kpc of the Sun due to the low volumetric rate of long-period Cepheids.At ages above 132 Myr, no cluster Cepheids are found within 2 kpc of the Sun.
Expected improvements
Astrometric uncertainties tend to increase with distance, complicating the identification of distant open clusters.For Gaia EDR3, the number of false cluster detections at distances greater than 3 kpc increases rapidly, so that significant work is required to ascertain the veracity of the recovered cluster candidates.However, upcoming Gaia data releases will improve the ability to correctly identify clusters at large distances, which can be expected to result in much improved cluster Cepheid samples with Gaia DR4 and beyond.Whereas Gaia EDR3 was based on 34 months of observations, the DR4 astrometric solution of Gaia will be based on approximately 66 months of observations, and the Gaia Collaboration expects improvements in proper motion proportional to t −3/2 and in parallax proportional to t −1/2 .Hence, DR4 proper motion uncertainties may be about 0.35 times their DR3 uncertainties, whereas DR4 parallax uncertainties could be approximately 0.70 times those reported in DR3.As Eq. 1 illustrates (cf.also footnote 3), the ability of detecting clusters against the background depends on distance and proper motion uncertainties.However, it is unlikely that the full gain in proper motion precision will directly map to a greater volume limit for detecting clusters because parallax errors improve less rapidly.To obtain a rough estimate of future improvements, we therefore considered a mean improvement by a factor of approximately 2 (counting parallax and both proper motion directions separately), which would double the distance within which cluster Cepheids can be detected.Based on their location in the Galactic plane, the num- Notes.The upper and lower indexes are an estimate of the maximum and minimum number of Cepheids, they are not standard errors, and for this reason, they are not are not added in quadrature.ber of clusters increases proportional to d 2 , resulting in a potential quadrupling of cluster-hosting Cepheids with DR4, and thus, in a potential improvement of a factor of 2 for the LL calibration.Since most long-period Cepheids are located at distances beyond 2 kpc, this will be particularly useful to increase the number of these high-priority targets.
Calibrating the cosmic distance ladder to within 1% requires parallaxes of Cepheids measured to an accuracy of ∼ 5 µas (Riess et al. 2021).At present, cluster Cepheids appear to be the most viable route to this goal.However, the angular covariance of the (E)DR3 parallaxes currently still sets an error floor of ∼ 7 µas and is therefore in urgent need of further improvement.It is very noteworthy that cluster members apparently do not require residual parallax offset corrections, since solving for this offset has thus far limited the power of Gaia parallaxes for measuring H 0 (e.g., Riess et al. 2018;Riess et al. 2021).Additionally, new HST observations of cluster Cepheids will be crucial to avoid uncertainties related to photometric transformations from the ground to the HST system.In summary, identifying new cluster Cepheids and measuring their photometry using HST will provide the most accurate basis for calibrating the distance ladder for a 1% H 0 measurement.We are optimistic that future Gaia data releases will continue to improve the error floor set by angular covariance and that other mitigation strategies can be identified to leverage the power of Gaia for the extragalactic distance scale and cosmology.
Conclusions
We carried out a systematic search for MW cluster Cepheids using Gaia EDR3 and DR3 data.The improved proper motion precision of EDR3 over DR2 allowed us to obtain a more detailed and accurate view of cluster membership for previously discussed cluster Cepheids.Since our method requires no advance knowledge of clusters being present in the vicinity of Cepheids, we a) determined cluster astrometry without the need for prior literature search on the host clusters, and b) avoided confusion of cluster identification in case of complex sky areas featuring multiple clusters.We thus established a Gold sample of 34 Cepheids residing in 28 distinct MW open clusters.They include the three new bona fide cluster Cepheids ST Tau, V0378 Cen, and GH Lup.Additionally, we corrected the host cluster identification for three Cepheids previously discussed in the literature, namely SX Vel, IQ Nor, and VW Cru.We find SV Vul to be a bona fide cluster Cepheid that falls squarely on the Galactic LL.We find three Silver sample cluster Cepheid candidates of interest, of which X Pup is a likely cluster Cepheid, whereas the XZ Car cluster membership is tentatively excluded by kinematic constraints and the AP Vel parallax narrowly contradicts membership in Ruprecht 65.Additional combinations of possible interest are included in a Bronze sample.
Using photometric distances of Cepheids in the Gold sample and the concatenated list of Cepheids from Pietrukowicz et al. (2021) and Ripepi et al. (2022b), we estimate the fraction of clustered Cepheids within 2 kpc to be in the range of f CC,2kpc = N Cl,2kpc /N Cep,2kpc = 0.088 +0.029 −0.019 .We find a slightly larger fraction for Cepheids pulsating in the fundamental mode compared to the first overtone, which may be related to the dependence of f CC on age and cluster dispersal timescales.
Cluster parallaxes are superior for LL calibration compared to individual Cepheid parallaxes because cluster member stars combine several benefits, including a) greater statistical precision, b) better systematics in a fainter magnitude range that does not require special processing, c) the absence of high-amplitude variability, and d) greater consistency in brightness and color with LMC stars and quasars used to determine the EDR3 parallax systematics (L21).The uncertainty of average cluster parallaxes is currently dominated by angular covariance, which limits average parallax uncertainties to 7 µas, although the statistical uncertainty can be as low as 1.4 µas.
We identified the magnitude and color ranges of 12.5 < G < 17 mag and 0.23 < Bp − Rp < 2.75 as a sweet spot for determining average cluster parallaxes.Previous studies (e.g,.Flynn et al. 2022;Maíz Apellániz 2022) found that parallaxes of cluster member stars in this magnitude range are adequately corrected by the L21 recipes, and we cross-checked this result using Cepheids in the LMC, taking the metallicity difference between MW and LMC Cepheids into account.Using the LL metallicity slope γ yielded negative values in six individual photometric passbands and three reddening-free Wesenheit magnitudes, confirming recent results by Breuval et al. (2022).Allowing for a nonzero offset for cluster parallaxes yields a weighted average of ∆ Cl = −4 ± 6 µas, with each individual offset consistent with 0 to within 1σ.Hence, we confirm that cluster parallaxes determined using member stars in this magnitude and color range require no further correction of residual parallax offsets beyond the L21 corrections.We stress that the LMC was used only for comparison and does not otherwise enter the results of this study.
Setting ∆ cl = 0, we calibrated the Galactic Cepheid LL in the several passbands and reddening-free Wesenheit magnitudes while simultaneously solving for a residual parallax offset of Gaia parallaxes of Cepheids, ∆ Cep .In particular, we calibrated the absolute luminosity scale of 10 d fundamentalmode Cepheids at solar metallicity to a precision of 0.94% using NIR HST Wesenheit magnitudes and to a precision of 0.87% using optical Gaia Wesenheit magnitudes.The LL slope and metallicity effect from the SH0ES analysis provide the most direct comparison of our results of relevance for the Hubble constant and reveals excellent (0.3σ) agreement with the recent results by Riess et al. (2022a).Using NIR HST and optical Gaia Wesenheit magnitudes, we obtained ∆ Cep = −17 ± 5 and −19 ± 3 µas, respectively.This 7σ measurement of the residual parallax offset for Cepheids is the most precise to date and provides strong independent confirmation of the Cepheid parallax offset of −14 ± 6 µas measured by the SH0ES team.
Cluster Cepheids can play a crucial role for the measurement of H 0 by providing an accurate absolute trigonometric scale based on Gaia astrometry without the need to solve for further offsets while determining the Hubble constant.Future developments, such as improved proper motion membership constraints for cluster detection through the longer astrometric baselines of Gaia in future data releases, improved corrections of the Gaia parallax systematics and angular covariance, and high-quality photometry of MW Cepheids in and out of clusters will particularly improve the base calibration of the distance scale toward a 1% Hubble constant measurement.
Fig. 1 .
Fig. 1.Schematic overview of the pipeline designed to detect cluster Cepheids.
Fig. 2 .
Fig. 2. Difference between individual and cluster average parallax for all member stars considered.Left: Comparison of the parallax difference as a function of the G magnitude, where the number of stars per bin is color-coded according to the color bar on the right.Right: Same as the left plot, but as a function of the color Bp − Rp.The vertical dotted lines in both panels illustrate the magnitude and color range we used to estimate the cluster parallaxes.
Fig. 3 .
Fig. 3. Position in the sky, position in the proper motion space, and color magnitude diagram for different cluster Cepheids.Background stars are shown in gray, and the cluster membership probability is color-coded.Light colors indicate high probability.Cepheids are shown as labeled using large filled red circles.Cepheids detected as cluster members by HDBSCAN also feature an overplotted symbol to illustrate membership probability.
Fig. 4 .
Fig. 4. Graphical representation of membership constraints for specific examples of cluster Cepheids in the Silver (XZ Car) and Bronze (BB Cen, V620 Pup) samples, as well as two rejected associations.
Fig. 5 .
Fig.5.Cluster Czernik 41 and the Cepheid ATO J297.7863+25.3136.The cluster is a clear overdensity on the sky and in the proper motion space.However, the CMD does not exhibit a clean main sequence with member stars at atypically red colors, indicating high extinction, which is also reflected by varying levels of background stars amid the gray points on the left.
Fig. 6 .
Fig. 6.ABL for fundamental-mode Cepheids in the Gold sample using different photometric systems.Open circles indicate the two Cepheids of the Silver sample that are not part of the LL fits.The ABL values and the residuals were shifted by constant offsets as indicated in the legend to facilitate visual inspection.Cluster parallaxes were determined after applying the L21 parallax corrections.
Fig. 7 .Fig. 8 .
Fig. 7. ABL for W H based on HST photometry for the joint sample of Gold cluster Cepheids (N = 15) and the Cepheids in the R21a sample (N = 67).Black error bars are derived using Gaia EDR3 parallaxes of Cepheids, and colored error bars are based on cluster parallaxes.Specific cases are colored individually to help identify Cepheids with cluster parallaxes discussed in the text.U Sgr, S Nor, and SV Vul appear twice in the plot because we use the Cepheid and cluster parallaxes to estimate its ABL.The Cepheids in the Silver sample XZ Car and X Pup are not included in the fit.In the plot, the zeropoint offset of the Cepheids has been already applied.0.02 0.04 0.06 0.08 0.10 0.12 0.14
MFig. 9 .
Fig. 9. LL in the H and G Wesenheit bands.Given the high precision of the Cepheid parallaxes, their individual distances were calculated as 1/ .The plots are shown for illustration purposes, and they were not used to fit the data.
Fig. 10 .
Fig. 10.Linear fit of the LL parameters as a function of the inverse of the effective wavelength in different photometric filters.
Fig. 11 .
Fig. 11.Clustered Cepheid fraction as a function of Cepheid age estimated using the period-age relations for solar metallicity(Anderson et al. 2016b).The size of the error bars illustrates the full range of possible fractions.Different numbers of bins were used to illustrate the dependence on binning.Young long-period Cepheids are rare within 2 kpc of the Sun, increasing the scatter at ages below 80 Myr.No Cepheids older than 132 Myr are found in clusters within 2 kpc of the Sun.
Fig. A.1.Proper motion dispersion of the detected clusters in the Gold, Silver, and Bronze samples.
due to the presence of
Table 1 .
Radial velocity information for clusters and Cepheids
Table 3 .
Silver sample of cluster Cepheids.
Table 4 .
Bronze sample of cluster Cepheids.
Table 6 .
Astrometric and photometric constraints applied to the MW Cepheid sample.
Table 7 .
MW Cepheid samples used to calibrate the Galactic Cepheid LL in various passbands . . . . . . . . . . . . . . . . . . ...Notes.The complete version of this table is available at the CDS. is the Cepheid parallax as obtained from Gaia DR3, and corr lists the parallax corrected for the L21 offset.
Table 9 .
Metallicity term γ and zeropoint offset ∆ obtained by comparing Gold sample cluster Cepheids to the LMC LL using Equation (9).
Table 10 .
Information used for determining the Galactic LL using cluster Cepheids.
Table 12 .
Number of Cepheids within 2 kpc of the Sun.
Table A.1.Cluster members of the Gold, Silver, and Bronze samples.The complete version of this table is available at the CDS.corr is the parallax corrected applying the L21 offset, and corr is the corresponding value of the correction. Notes. | 17,304 | 2022-08-19T00:00:00.000 | [
"Physics"
] |
Recommendations for Processing Head CT Data
Many research applications of neuroimaging use magnetic resonance imaging (MRI). As such, recommendations for image analysis and standardized imaging pipelines exist. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, which focuses mainly on head CT data with lesions. We present tools and a complete pipeline for processing CT data, focusing on open-source solutions, that focus on head CT but are applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain within CT presenting a full example with code. Overall, we recommend anonymizing data with Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registration using a publicly-available CT template for analysis.
INTRODUCTION
Many research applications of neuroimaging use magnetic resonance imaging (MRI). MRI allows researchers to study a multitude of applications and diseases, including studying healthy volunteers. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Studies using CT scans cannot generally recruit healthy volunteers or large non-clinical populations due to the radiation exposure and lack of substantial benefit. As such, much of head CT data is gathered from prospective clinical trials or retrospective studies based on health medical record data and hospital picture archiving and communication system (PACS). We discuss transforming this data from clinical to research data and provide some recommendations and guidelines from our experience with CT data similar insights from working with MRI studies. We will discuss existing software options, focusing on open-source tools, for neuroimaging in general and those that are specific to CT throughout the paper.
We will focus on aspects of quantitatively analyzing the CT data and getting the data into a format familiar to most MRI neuroimaging researchers. Therefore, we will not go into detail of imaging suites designed for radiologists, which may be proprietary and quite costly. Moreover, we will be focusing specifically on non-contrast head CT data, though many of the recommendations and software is applicable to images of other areas of the body.
The pipeline presented here is similar to that of Dhar et al. (2018). We aim to discuss the merits of each part of the pipeline with a set of choices that have available code. In addition, we present a supplement with a working example, including code, to go from DICOM data to a spatially-normalized brain image. We also touch on points relevant to de-identification of the data, not only from DICOM metadata, but also removing identifiable information from the image itself such as the face. Overall, we aim to discuss the suite of tools available, many of which built specifically for MRI, but provide slight modifications if necessary to have these work for head CT.
DATA ORGANIZATION
Most of the data coming from a PACS is in DICOM (Digital Imaging and Communications in Medicine) format. Generally, DICOM files are a combination of metadata (i.e., a header) about an image and the individual pixel data, many times embedded in a JPEG format. The header has a collection of information, usually referred to as fields or tags. Tags are usually defined by a set of 2 hexadecimal numbers, which are embedded as 4 alphanumeric characters. For example, (0008,103E) denotes the SeriesDescription tag for a DICOM file. Most DICOM readers extract and use these tags for filtering and organizing the files. The pixel data is usually given in the axial orientation in a high resolution (e.g., 0.5 mm 2 ) grid of 512 x 512 pixels.
We will use the phrase scanning session (as opposed to "study" and reserve study to denote a trial or analysis), a series for an individual scan, and a slice for an individual picture of the brain. Each series (Series Instance UID tag) and scanning session (Study Instance UID tag) should have a unique value in the DICOM header that allows DICOM readers to organize the data by scanning session and series. The following sections will discuss data organization and data formats.
DICOM Anonymization
One of the common issues with DICOM data is that a large amount of protected health information (PHI) can be contained in the header. DICOM is a standard where individual fields in the header contain the same values across different scanners and sites, but only if that manufacturer and site are diligent to ascribing to the DICOM standard. Though many DICOM header fields are consistent across neuroimaging studies, a collection of fields may be required to obtain the full amount of data required for analysis. Moreover, different scanning manufacturers can embed information in non-standard fields. The goal is to remove these fields if they contain PHI, but retain these fields if they embed relevant information of the scan for analysis. These fields then represent a challenge to anonymization without loss of crucial information if the data do not conform to a standard across scanning sites, manufacturers, or protocols.
We will discuss reading in DICOM data and DICOM header fields in the next section. Reading DICOM data may be necessary for extracting information, but many times the data must be transferred before analysis. Depending on the parties receiving the data, anonymization of the data must be done first. Aryanto et al. (2015) provides a look at a multitude of options for DICOM anonymization and recommend the RSNA MIRC Clinical Trials Processor (CTP, https://www.rsna.org/ research/imaging-research-tools) a, cross-platform Java software, as well as the DICOM library (https://www.dicomlibrary. com/) upload service. We also recommend the DicomCleaner cross-platform Java program as it has similar functionality. Bespoke solutions can be generated using dcm4che (such as dcm4che-deident, https://www.dcm4che.org/) and other DICOM reading tools (discussed below), but many of these tools have built-in capabilities that are difficult to add (such as removing PHI embedded in the pixel data, aka "burned in").
A Note on De-identification: Time Between Scans
Although most of the presented solutions are good at anonymization and de-identification of the header information, only a few such as CTP, have the utilities required for longitudinal preservation of date differences. Dates are considered removable identifiable information under HIPAA, some clinical trials and other studies rely on serial CT imaging data, and the differences between times are crucial to determine when events occur or are used in analysis.
Publicly Available Data
With the issues of PHI above coupled with the fact that most CT data is acquired clinically and not in a research setting, there is a dearth of publicly available data for head CT compared to head MRI. Sites for radiological training such as Radiopedia (https://radiopaedia.org/) have many cases of head CT data, but these are converted from DICOM to standard image formats (e.g., JPEG) so crucial information, such as Hounsfield Units and pixel dimensions, are lost.
Large repositories of head CT data do exist, though, and many in DICOM format, with varying licenses and uses. The CQ500 (Chilamkurthy et al., 2018) dataset provides approximately 500 head CT scans with different clinical pathologies and diagnoses, with a non-commercial license. All examples in this article use data from 2 subjects within the CQ500 data set. The Cancer Imaging Archive (TCIA) has hundreds of CT scans, many cases with brain cancer. TCIA also has a RESTful (representational state transfer) interface, which allows cases to be downloaded in a programmatic way; for example, the TCIApathfinder R package (Russell, 2018) and Python tciaclient module provide an interface. The Stroke Imaging Repository Consortium (http://stir.dellmed.utexas.edu/) also has head CT data available for stroke. The National Biomedical Imaging Archive (NBIA, https://imaging.nci.nih.gov) demo provides some head CT data, but are mostly duplicated from TCIA. The NeuroImaging Tools & Resources Collaboratory (NITRC, https://www.nitrc. org/) provides links to many data sets and tools, but no head CT images at this time. The RIRE (Retrospective Image Registration Evaluation, http://www.insight-journal.org/rire/) and MIDAS (http://www.insight-journal.org/midas) projects have small set of publicly available head CT (under 10 participants).
Reading DICOM Data
Though MATLAB has an extensive general imaging suite, including SPM (Penny et al., 2011), we will focus on R (R Core Team, 2018) Python (Python Software Foundation, https:// www.python.org/), and other standalone software. The main reasons are that R and Python are free, open source, and have a lot of functionality with neuroimaging and interface with popular imaging suites. We are also lead the Neuroconductor project (https://neuroconductor.org/) (Muschelli et al., 2018), which is a repository of R packages for medical image analysis. Other imaging platforms such as the Insight Segmentation and Registration Toolkit (ITK) are well-maintained, useful pieces of software that can perform many of the operations that we will be discussing. We will touch on some of this software with varying levels. We aim to present software that we have had used directly for analysis or preprocessing. Also, other papers and tutorials discuss the use of these tools in analysis (https://neuroconductor. org/tutorials).
For reading DICOM data, there are multiple options. The oro.dicom (Whitcher et al., 2011) and radtools (Russell and Ghosh, 2019) R packages, pydicom Python module (Mason, 2011), MATLAB imaging toolbox, and ITK (Schroeder et al., 2003) interfaces can read DICOM data amongst others. The DICOM toolkit dcmtk (Eichelberg et al., 2004) has multiple DICOM manipulation tools, including dcmconv to convert DICOM files to other imaging formats. Though most imaging analysis tools can read in DICOM data, there are downsides to using the DICOM format. In most cases, a DICOM file is a single slice of the full 3D image series. This separation can be cumbersome on data organization if using folder structures. As noted above, these files also can contain a large amount of PHI. Some image data may be compressed, such as JPEG2000 format. Alternatively, if data are not compressed, file storage is inefficient. Most importantly, many imaging analyses perform 3-dimensional (3D) operations, such as smoothing. Thus, putting the data into a different format that handles 3D images as 1 compressed file is desirable. We present examples of reading DICOM data above, but generally recommend using 3D imaging formats and using the above tools to read the DICOM header information.
Converting DICOM to NIfTI
Many different general 3D medical imaging formats exist, such as ANALYZE, NIfTI, NRRD, and MNC. We recommend the NIfTI format, as it can be read by nearly all medical imaging platforms, has been widely used, has a format standard, can be stored in a compressed format, and is how much of the data is released online. Moreover, we will present specific software to convert DICOM data and the recommended software (dcm2niix) outputs data in a NIfTI file.
Many sufficient and complete solutions exist for DICOM to NIfTI conversion. Examples include dicom2nifti in the oro.dicom R package, pydicom, dicom2nifti in MATLAB, and using large imaging suites such as using ITK image reading functions for DICOM files and then write NIfTI outputs. We recommend dcm2niix (https://github.com/ rordenlab/dcm2niix) (Li et al., 2016) for CT data for the following reasons: (1) it works with all major scanners, (2) incorporates gantry-tilt correction for CT data, (3) can handle variable slice thickness, (4) is open-source, (5) is fast, (6) is an actively maintained project, and (7) works on all 3 major operating systems (Linux/OSX/Windows). Moreover, the popular AFNI neuroimaging suite includes a dcm2niix program with its distribution. Interfaces exist, such as the dcm2niir (Muschelli, 2018) package in R and nipype Python module (Gorgolewski et al., 2011). Moreover, the divest package (Clayden and Rorden, 2018) wraps the underlying code for dcm2niix to provide the same functionality of dcm2niix, along with the ability to manipulate the data for more versatility.
We will describe a few of the features of dcm2niix for CT. In some head CT scans, the gantry is tilted to reduce radiation exposure to non-brain areas, such as the eyes. Thus, the slices of the image are at an oblique angle. If slice-based analyses are done or an affine registration (as this tilting is a shearing) are applied to the 3D data, this tilting may implicitly be corrected. This tilting causes issues for 3D operations as the distance of the voxels between slices is not correct and especially can show odd visualizations ( Figure 1A). The dcm2niix output returns both the corrected and non-corrected image (Figure 1). As the correction moves the slices to a different area, dcm2niix may pad the image so that the entire head is still inside the field of view. As such, this may cause issues with algorithms that require the 512 x 512 axial slice dimensions. Though less common, variable slice thickness can occur in reconstructions where only a specific area of the head is of interest. For example, an image may have 5 mm slice thicknesses throughout the image, except for areas near the third ventricle, where slices are 2.5 mm thick. To correct for this, dcm2niix interpolates between slices to ensure each image has a consistent voxel size. Again, dcm2niix returns both the corrected and non-corrected image.
Once converted to NIfTI format, one should ensure the scale of the data. Most CT data is between −1024 and 3071 Hounsfield Units (HU). Values less than −1024 HU are commonly found due to areas of the image outside the field of view that were not actually imaged. One first processing step would be to Winsorize the data to the [−1024, 3071] range. After this step, the header elements scl_slope and scl_inter elements of the NIfTI image should be set to 1 and 0, respectively, to ensure no data rescaling is done in other software. Though HU is the standard format used in CT analysis, negative HU values may cause issues with standard imaging pipelines built for MRI, which typically have positive values. Rorden (CITE) proposed a lossless transformation, called Cormack units, which have a minimum value of 0. The goal of the transformation is to increase the range of the data that is usually of interest, from −100 to 100 HU and is implemented in the Clinical toolbox (discussed below). Most analyses are done using HU, however.
Convolution Kernel
Though we discuss CT as having more standardized Hounsfield unit values, this does not imply CT scans cannot have vastly different properties depending on parameters of scanning and reconstruction. One notable parameter in image reconstruction is the convolution kernel [i.e., filter, DICOM field (0018,1210)] used for reconstruction. We present slices from an individual subject from the CQ500 (Chilamkurthy et al., 2018) dataset in Figure 2. Information on which kernel was used, and other reconstruction parameter information can be found in the DICOM header. The kernel is described usually by the letter "H" (for head kernel), a number indicating image sharpness (e.g., the higher the number, the sharper the image, the lower the number, the smoother the image), and an ending of "s" (standard), "f " (fast), "h" for high resolution modes (Siemens SOMATOM Definition Application Guide), though some protocols simply name them "soft-tissue, " "standard, " "bone, " "head, " or "blood, " amongst others. The image contrast can depend highly on the kernel, and "medium smooth" kernels (e.g., H30f, H30s) can provide good contrast in brain tissue ( Figure 2E). Others, such FIGURE 1 | Example of gantry-tilt correction. Using "dcm2niix," we converted the DICOM files to a NIfTI file, which had a 30 degree tilt. The output provides the uncorrected image (A) and the tilt-corrected image (B). We see that the reconstructed image without correction appears fine within the axial plane, but out of plane has an odd 3D shape. This shape will be corrected with an affine transformation, which is done in the conversion, correcting the image as seen in (B).
as "medium" kernels (e.g., H60f, H60s) provide contrast in high values of the image, such as detecting bone fractures (Figure 2A), but not as good contrast in brain tissue ( Figure 2B). Thus, when combining data from multiple sources, the convolution kernel may be used to filter, stratify, or exclude data.
Moreover, the noise and image contrast can be different depending on the image resolution of the reconstruction. Most standard head CT scans have high resolution within the axial plane (e.g., 0.5 x 0.5 mm). Image reconstructions can have resolution in the interior-superior plane (e.g., slice thickness) anywhere from 0.5mm (aka "thin-slice, " Figure 2F) to 2.5 mm, to 5 mm, where 5 mm is fairly common. The larger the slice thicknesses are, the smoother the reconstruction (as areas are averaged). Also, the added benefit for radiologists and clinicians are that fewer slices are needed to be reviewed for pathology or to get a comprehensive view of the head. In research, however, these thin-slice scans can get better estimates of volumes of pathology, such as a hemorrhage (CITE), or other brain regions. Moreover, when performing operations across images, algorithms may need to take this differing resolution, and therefore image dimensions, into account. We will discuss image registration in the data preprocessing as one way to harmonize the data dimensions, but registration does not change the inherent smoothness or resolution of the original data.
In some instances, only certain images are available for certain subjects. For example, most of the subjects have a noncontrast head CT with a soft-tissue convolution kernel, whereas some only have a bone convolution kernel. Post-processing smoothing can be done, such as 3D Gaussian (Figure 2C) or anisotropic (Perona-Malik) smoothing (Perona and Malik, 1990; Figure 2D). This process changes the smoothness of the data, contrast of certain areas, can cause artifacts in segmentation, but can make the within-plane properties similar for scans with bone convolution kernel reconstructions compared to soft-tissue kernels in areas of the brain (Figure 2E).
Contrast Agent
Though we are discussing non-contrast scans, head CT scans with contrast agent are common. The contrast/bolus agent again should be identified in the DICOM header field (0018,0010), but may be omitted. The contrast changes CT images, especially where agent is delivered, notably the vascular system of the brain ( Figure 2G). These changes may affect the steps recommended in the next section of data preprocessing, where thresholds may need to be adjusted to include areas with contrast which can have higher values than the rest of the tissue (e.g., > 100 HU; Figure 2G).
DATA PREPROCESSING
Now that the data is in a standard file format, we can discuss data preprocessing. As the data are in NIfTI format, most software built for MRI and other imaging modalities should work, but adaptations and other considerations may be necessary.
Bias-Field/Inhomogeneity Correction
In MRI, the scan may be contaminated by a bias field or set of inhomogeneities. This field is generally due to inhomogeneities/inconsistencies in the MRI coils or can be generated by non-uniform physical effects on the coils, such as heating. One of the most common processing steps done first is to remove this bias field. In many cases, these differences can more general be considered non-uniformities, in the sense that the same area with the same physical composition and behavior may take on a different value if it were in a different spatial location of the image. Though CT data has no coil or assumed bias field due to the nature of the data, one can test if trying to harmonize the data spatially with one of these correction procedures improves performance of a method. Though we do not recommend this procedure generally, as it may reduce contrasts between areas of interest, such as hemorrhages in the brain, but has been used to improve segmentation (Cauley et al., 2018). We would like to discuss potential methods and CT-specific issues.
Overall, the assumptions of this bias field are that it is multiplicative and is smoothly varying. One of the most popular inhomogeneity corrections are the non-parametric nonuniformity normalization (e.g., N3; Sled et al., 1998) and its updated improvement N4 (Tustison et al., 2010) in ANTs, though other methods exist in FSL (Zhang et al., 2001) and other software (Ashburner and Friston, 1998;Belaroussi et al., 2006). Given the assumption of the multiplicative nature of the field, N4 performs an expectation-maximization (EM) algorithm on the log-transformed image, assuming a noise-free system. As CT data in HU has negative values, the log transform is inappropriate. Pre-transforming or shifting the data values may be necessary to perform this algorithm, though these transforms may affect performance. Moreover, artifacts or objects (described below), such as the bed, may largely effect the estimation of the field and segmentation may be appropriate before running these corrections, such as brain extraction or extracting only subject-related data and not imaged hardware. The ANTsR package (https://github.com/ANTsX/ANTsR) provides the n4BiasFieldCorrection function in R; ANTsPy (https://github.com/ANTsX/ANTsPy) and NiPype (Gorgolewski et al., 2011) provide n4_bias_field_correction and N4BiasFieldCorrection in Python, respectively.
Brain Extraction in CT
Head CT data typically contains the subject's head, face, and maybe neck and other lower structures, depending on the field of view. Additionally, other artifacts are typically present, such as the pillow the subject's head was on, the bed/gurney, and any instruments in the field of view. We do not provide a general framework to extract the complete head from hardware, but provide some recommendations for working heuristics. Typically the range of data for the brain and facial tissues are within −100 to 300 HU, excluding the skull, other bones, and calcificiations. Creating a mask from values from the −100 to 1000 HU range tends to remove some instruments, the pillow, and the background. Retaining the largest connected component will remove high values such as the bed/gurney, filling holes (to include the skull), and masking the original data with this resulting mask will return the subject (Figure 3).
Note, care must be taken whenever a masking procedure is used as one standard way is to set values outside an area of interest to 0. With CT data 0 HU is a real value of interest: if all values are set to 0 outside the mask, the value of 0 is aliased to both 0 HU and outside of mask. Either transforming the data into Cormack units, adding a value to the data (such as 1025) then setting values to 0, or using NaN are recommended in values not of interest.
One of the most common steps in processing imaging of the brain is to remove non-brain structures from the image. Many papers present brain extracted CT images, but do not always disclose the method of extraction. We have published a method that uses the brain extraction tool (BET) from FSL, originally built for MRI, to perform brain extraction (Muschelli et al., 2015) with the CT_Skull_Strip function in the ichseg R package (Muschelli, 2019). An example of this algorithm performance on a 5 mm slice, non-contrast head CT with a soft-tissue convolution kernel is seen in Figure 3, which extracts the relevant areas for analysis. Recently, convolutional neural networks and shape propagation techniques have been quite successful in this task (Akkus et al., 2018) and models have been released (https:// github.com/aqqush/CT_BET). Overall, much research can still be done in this area as traumatic brain injury (TBI) and surgery, such as craniotomies or craniectomies, can cause these methods to potentially fail. Overall, however, large contrast between the skull and brain tissue and standardized Hounsfield Units can make brain segmentation an easier task than in MRI.
Tissue-Class Segmentation
In many structural MRI applications, the next step may be tissue-class segmentation, denoting areas of the cerebrospinal FIGURE 3 | Human and brain extraction results. Here we present a 5 mm slice, non-contrast head CT with a soft-tissue convolution kernel. The left figure represents the CT image, showing all the areas imaged, overlaid with the extracted head mask as described in the section of "Brain Extraction in CT." The right hand side is the image overlaid with a brain mask. The brain mask was created using an adaptation of the Brain Extraction Tool (BET) from FSL, published by Muschelli et al. (2015). fluid (CSF), white matter and gray matter. Though Cauley et al. (2018) provides an example of tissue-class segmentation of CT scans using available software (intended for MRI) (Zhang et al., 2001), we will not cover them in detail here. One potential issue is the contrast between white and gray matter is much lower than compared to MRI T1-weighted imaging. Rather than tissueclass segmentation, a number of examples exist of determining CSF space from CT, including scans with pathology (Hacker and Artmann, 1978;Liu et al., 2010;Li et al., 2012;Poh et al., 2012;Ferdian et al., 2017;Patel et al., 2017;Dhar et al., 2018). These methods sometimes segment the CSF from the brain, including areas of the subarachnoid space, only the ventricles, or some combination of the two. Moreover, these CT-specific methods have not released open-source implementations or trained models for broad use.
Removal of Identifiable Biometric Markers: Defacing
As part of the Health Insurance Portability and Accountability Act (HIPAA) in the United States, under the "Safe Harbor" method, releasing of data requires the removal a number of protected health information (PHI) (Centers for Medicare & Medicaid Services, 1996). For head CT images, a notable identifier is "Full-face photographs and any comparable images". Head CT images have the potential for 3D reconstructions, which likely fall under this PHI category, and present an issue for reidentification of participants (Schimke and Hale, 2015). Thus, removing areas of the face, called defacing, may be necessary for releasing data. If parts of the face and nasal cavities are the target of the imaging, then defacing may be an issue. As ears may be a future identifying biometric marker, and dental records may be used for identification, these areas may desirable to remove (Cadavid et al., 2009;Mosher, 2010).
The obvious method for image defacing is to perform brain extraction we described above. If we consider defacing to be removing parts the face, while preserving the rest of the image as much as possible, this solution is not sufficient. Additional options for defacing exist such as the MRI Deface software (https://www.nitrc.org/projects/mri_deface/), which is packaged in the FreeSurfer software and can be run using the mri_deface function from the freesurfer R package (Bischoff-Grethe et al., 2007;Fischl, 2012). We have found this method does not work well out of the box on head CT data, including when a large amount of the neck is imaged.
Registration methods involve registering images to the CT and applying the transformation of a mask of the removal areas (such as the face). Examples of this implementation in Python modules for defacing are pydeface (https://github.com/ poldracklab/pydeface/tree/master/pydeface) and mridefacer (https://github.com/mih/mridefacer). These methods work since the registration from MRI to CT tends to performs adequately, usually with a cross-modality cost function such as mutual information. Other estimation methods such as the Quickshear Defacing method rely on finding the face by its relative placement compared to a modality-agnostic brain mask (Schimke and Hale, 2011). The fslr R package implements both the methods of pydeface and Quickshear. The ichseg R package also has a function ct_biometric_mask that tries to remove the face and ears based registration to a CT template (described below). Overall, removing potential biometric markers from imaging data should be considered when releasing data and a number of methods exist, but do not guarantee complete de-identification and may not work directly with CT without modification.
Registration to a CT Template
Though many analyses in clinical data may be subject-specific, population-level analyses are still of interest. Some analyses want spatial results at the population-level, which require registration to a population template. One issue with these approaches is that most templates and approaches rely on an MRI template. These templates were developed by taking MRI scans of volunteers, which again is likely unethical with CT due to the radiation exposure risk without other benefits. To create templates, retrospective searches through medical records can provide patients who came in with symptoms warranting a CT scan, such as a migraine, but had a diagnosis of no pathology or damage. Thus, these neuro-normal scans are similar to that of those collected those in MRI research studies, but with some important differences. As these are retrospective, inclusion criteria information may not be easily obtainable if not clinically collected, scanning protocols and parameters may vary, even within hospital and especially over time, and these patients still have neurological symptoms. Though these challenges exist, with a large enough patient population and a research consent at an institution, these scans can be used to create templates and atlases based on CT. To our knowledge, the first publicly available head CT template exists was released by Rorden et al. (2012), for the purpose of spatial normalization/registration.
One interesting aspect of CT image registration is again that CT data has units within the same range. To say they are uniformly standardized is a bit too strong as tomography and other confounds can impact units. Thus, it is our practice to think of them as more standardized than MRI. This standardization may warrant or allow the user different search and evaluation cost functions for registration, such as least squares. We have found that normalized mutual information (NMI) still performs well in CT-to-CT registration and should be at least considered when using CT-to-MRI or CT-to-PET registration. Along with the template above, Rorden et al. (2012) released the Clinical toolbox (https://github.com/neurolabusc/Clinical) for SPM to allow researchers to register head CT data to a standard space. However, as the data are in the NIfTI format, almost all image registration software should work, though one should consider transforming the units using Cormack units or other transformations as negative values may implicitly be excluded in some software built for MRI registration. We have found using diffeomorphic registrations such as symmetric normalization (SyN) from ANTs and ANTsR with NMI cost functions to perform well. We present results of registering the head CT presented in brain extraction to the template from Rorden et al. (2012) using SyN in Figure 4.
In some cases, population-level analyses can be done, but while keeping information at a subject-specific level. For example, registration from a template to a subject space can provide information about brain structures that can be aggregated across people. For example, one can perform a label fusion approach to CT data to infer the size of the hippocampus and then analyze hippocampi sizes across the population. Numerous label fusion approaches exist (Collins and Pruessner, 2010;Langerak et al., 2010;Sabuncu et al., 2010;Asman and Landman, 2013;Wang et al., 2013), but rely on multiple templates and publicly available segmented CT images are still lacking. Additionally, the spatial contrast in CT is much lower than T1-weighted MRI for image segmentation. Therefore, concurrent MRI can be useful. One large issue is that any data gathered with concurrent MRI the high variability in MRI protocol done if it is not generally standardized within or across institution. We see these limits as a large area of growth and opportunity in CT image analysis.
Pipeline
Overall, our recommended pipeline is as follows: 1. Use CTP or DicomCleaner to organize and anonymize the DICOM data from a PACS. 2. Extract relevant header information for each DICOM, using software such as dcmdump from dcmtk and store, excluding PHI. 3. Convert DICOM to NIfTI using dcm2niix, which can create brain imaging data structure (BIDS) formatted data (Gorgolewski et al., 2016). Use the tilt-corrected and data with uniform voxel size.
After, depending on the purpose of the analysis, you may do registration then brain extraction, brain extraction then FIGURE 4 | Image registration result. Here we displayed the scan (A) registered to a CT template (B) from Rorden et al. (2012). The registration by first doing an affine registration, followed by symmetric normalization (SyN), a non-linear registration implemented in ANTsR. The registration was done with the skull on the image and the template. We see areas of the image that align generally well, but may not be perfect.
registration, or not do registration at all. If you are doing analysis of the skull, you can also use brain extraction as a first step to identify areas to be removed. For brain extraction, run BET for CT or CT_BET (especially if you have GPUs for the neural network). If registration is performed, keeping the transformations back into the native, subject space is usually necessary as many radiologists and clinicians are comfortable to subject-specific predictions or segmentations. Converting the data from NIfTI back to DICOM is not commonly done, but is possible as most PACS are built for DICOM data.
CONCLUSIONS
We present a simple pipeline for preprocessing (see Data Sheet 1) of head CT data, along with software options of reading and transforming the data. We have found that many tools exist for MRI and are applicable to CT data. Noticeable differences exist between the data in large part due to the collection setting (research vs. clinical), data access, data organization, image intensity ranges, image contrast, and population-level data. As CT scans provide fast and clinically relevant information and with the increased interest in machine learning in medical imaging data, particularly deep learning using convolutional neural networks, research and quantitative analysis of head CT data is bound to increase. We believe this presents an overview of a useful set of tools and data for research in head CT. For research using head CT scans to have the level of interest and success as MRI, additional publicly available data needs to be released. We saw the explosion of research in MRI, particularly functional MRI, as additional data were released and consortia created truly large-scale studies. This collaboration is possible at an individual institution, but requires scans to be released from a clinical population, where consent must be first obtained, and upholding patient privacy must be a top priority. Large internal data sets likely exist, but institutions need incentives to release these data sets to the public. Also, though institutions have large amounts of rich data, general methods, and applications require data from multiple institutions as parameters, protocols, and population characteristics can vary widely.
One of the large hurdles after creating automated analysis tools or supportive tools to help radiologists and clinicians is the reintegration of this information into the healthcare system. We do not present answers to this difficult issue, but note that these tools first need to be created to show cases where this reintegration can improve patient care, outcomes, and other performance metrics. We hope the tools and discussion we have provided advances those efforts for researchers starting in this area.
All of the code used to generate the figures in this paper is located at https://github.com/muschellij2/process_head_ct. The code uses packages from Neuroconductor in R. All data presented was from the CQ500 data set, which can be downloaded from http://headctstudy.qure.ai/dataset.
DATA AVAILABILITY
Publicly available datasets were analyzed in this study. This data can be found here: http://headctstudy.qure.ai/dataset.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication. | 8,097.6 | 2019-09-04T00:00:00.000 | [
"Medicine",
"Physics"
] |
Molecularly defined extraintestinal pathogenic Escherichia coli status predicts virulence in a murine sepsis model better than does virotype, individual virulence genes, or clonal subset among E. coli ST131 isolates
ABSTRACT Background: Escherichia coli ST131, mainly its H30 clade, is the leading cause of extraintestinal E. coli infections but its correlates of virulence are undefined. Materials and methods: We tested in a murine sepsis model 84 ST131 isolates that differed by country of origin (Spain vs. USA), clonal subset, resistance markers, and virulence genes (VGs). Virulence outcomes, including illness severity score (ISS) and “killer” status (>80% mouse lethality), were compared statistically with clonal subset, individual and combined VGs, molecularly defined extraintestinal and uropathogenic E. coli (ExPEC, UPEC) status, and country of origin. Results: Virulence varied widely by strain. Univariable correlates of median ISS and percent “killer” (outcomes if variable present vs. absent) included pap (ISS, 4.4 vs. 3.8; “killer”, 71% vs. 46%), kpsMII (4.1 vs. 2.3; 59% vs. 25%), K2/K100 (4.4 vs. 3.2; 77% vs. 41%), ExPEC (4.2 vs. 2.2; 62% vs. 17%), Spanish origin (4.3 vs. 3.1; 65% vs. 36%), and H30R1 subset (2.5 vs. 4.1; 35% vs. 59%). With multivariable adjustment, ExPEC status was the only consistently significantly predictive variable. Conclusion: Within ST131 the strongest predictor of experimental virulence was molecularly defined ExPEC status. Clonal subsets seemed to behave differently in the murine sepsis model by country of origin.
Background
The pandemic extraintestinal pathogenic Escherichia coli (ExPEC) clone ST131 is a major contributor to the increasing incidence of extraintestinal E. coli infections, mainly bloodstream and urinary tract infections, especially those caused by fluoroquinolone-resistant or extended-spectrum beta-lactamase (ESBL)-producing strains. ST131 also occurs in the gut microbiota of healthy and institutionalized persons [1].
Like other E. coli clones from virulence-associated phylogroup B2, ST131 exhibits a broad range of genes that encode known or suspected virulence factors, hence are called virulence genes (VGs). Such VGs, which contribute to adherence, colonization, invasion, and/or persistence in the host, include siderophores (iutA, fyuA, iroN), adhesins (fimH, pap, afa/dra, iha, yfcV), toxins (sat, vat), protectins (traT, iss, capsule variants), and miscellaneous elements (cvaC, ompT, usp, malX). ST131's VGs have been proposed as a possible reason for its dramatic dissemination and clinical emergence. Moreover, some lineages within ST131 have been associated with sepsis, worse clinical outcomes, and errors in empirical treatment [5,6]. Whether for ST131 particular VGs or combinations thereof are required for, or associated with, successful colonization, establishment of infection, or progression to severe disease is unclear.
Studies to date of the experimental virulence of ST131 in diverse animal hosts (mice, zebrafish, Caenorhabditis elegans, and Galleria mellonella) have yielded conflicting results [7][8][9][10]. Moreover, several authors have identified specific combinations of VGs, or "virotypes", within ST131 [8,11,12] that in some studies predicted experimental virulence [8]. However, interpretation of these studies is impeded by their small sample size, inconsistent selection of VGs, limited attention to ST131 clonal subsets, and diversity of animal models, including some of the uncertain relevance to human infections.
Because of the importance of identifying potentially virulent ST131 strains, we sought here to identify among E. coli ST131 isolates associations of experimental virulence with diverse bacterial traits that could act as markers for said virulence, whether or not they directly determine it. For this, we used an established murine sepsis model and a comparatively large collection of well-characterized ST131 isolates of diverse ecological and geographical origins. We then compared experimental virulence results with the strains' country of origin, virulence genotype, ST131 clonal subset, ESBL genotype, and fluoroquinolone resistance status.
Study collection and subtyping
A convenience sample of 84 diverse ST131 E. coli isolates from various previously published collections from our group was analyzed [9,[13][14][15][16][17]. Geographical origin, year of testing, and ecological source are shown in Table 1. Whereas all isolates from Spain were tested in 2014, 81% of the isolates from USA were tested pre-2014. Because the control strains yielded consistent results across experiments and years (not shown), we assumed that any variation associated with the year of testing was due to geographical factors. Accordingly, to avoid possible bias, we also analyzed the variable "country", despite its close correlation to "year of testing".
Experimental virulence
In vivo virulence was assessed previously using a wellestablished murine subcutaneous sepsis model [13,26] at the Minneapolis Veteran Affairs Medical Center (MN, USA) according to animal use protocol 120,603, as approved by the local Institutional Animal Care and Use Committee. The sepsis model results for the present isolates were reported elsewhere [13,14].
For this model, female pathogen-free outbred Swiss-Webster mice were inoculated subcutaneously with approximately 10 9 CFU/mL log-phase bacteria in 0.2 mL saline, as described previously [13,26]. Mouse health was assessed twice daily for 3 days post-challenge by experienced researchers unaware of strain identity, following a strict protocol and using positive control strain CFT073 (high lethality) and negative control strain MG1655 (zero illness or lethality). Mice were classified daily as to maximal illness severity, which ranged from 1 (healthy) to 5 (dead), with intermediate scores 2 (barely ill), 3 (moderately ill), and 4 (severely ill). Results for the controls were consistent during all the experiments, regardless of year (data not shown). The variables used as metrics of the study isolates' virulence potential included overall mean illness severity score (ISS), a continuous variable obtained by averaging the daily illness severity scores for the mice challenged with a given isolate, and "killer" status, defined based on death of ≥80% of the challenged mice [15]. Each test strain was assessed initially in five mice, followed by another five mice if the initial testing did not yield a consistent result (i.e., lethality or survival for four or five of the initially challenged mice). To minimize the risk of a possible cohort bias, mice from a given shipment were assigned to different test strains using a formal randomization scheme.
Statistically significant values are in bold. frequencies and percentages and were compared using a chi-square test or Fisher's exact test, as appropriate. The criterion for statistical significance was p < 0.05, with Bonferroni correction for multiple comparisons. To avoid possible bias involving the variables "year of testing" and "country of origin", analyses were repeated after stratification by year and country. Univariable and multivariable regression analysis (simple regression, for ISS; logistic regression, for "killer" status) were used to assess the predictive power of independent variables with and without adjustment for collinearity between them. For multivariable analysis, only those bacterial traits were included that in univariable analyses predicted one or both of the experimental virulence outcomes, whether overall or after stratification by year or country. For use in multivariable modeling, the qualifying univariable predictors were divided into a core set (ExPEC status, belonging to the H30R1 clonal subset, and year of testing) and a supplementary set (VG score, genes pap, kpsMII, K2/K100).
Two methods were used for variable entry into the multivariable models, i.e., forced and stepwise. The forced-entry method was applied first to only the core set of candidate predictor variables, then to the core plus supplementary variable sets combined. The stepwise method was applied to both variable sets combined. Data were analyzed using STATA (Stata Statistical Software: Release 11. College Station, TX: StataCorp LP).
Of the 49 studied VGs and variants, 35 were detected in at least one isolate each. VGs were distributed significantly by clonal subset (Table 2). However, with stratification by year of testing and country of origin, the only statistically significant differences involved kpsMII and ibeA (Supplementary material tables S1-S4). The mean VG score was 11.8 (SD 1.9) overall but was lower among H30R1 isolates (mean 10.2, SD 1.2) than among H30Rx (mean 12.6, SD 1.7) and non-H30 isolates (mean 12.1, SD 1.9) (p < 0.01, H30R1 vs. H30Rx or non-H30). Even with stratification by country of origin and year of testing, these differences remained statistically significant. By contrast, within a given clonal subset, VG scores did not differ significantly by year of testing or country of origin (Supplementary material Table S5).
The 35 detected VGs occurred in 38 distinct combinations (38 VG profiles; Figure 1). Whereas most profiles (66%, 25/38) occurred in a single isolate each, the two most common profiles were repeated 9 and 10 times each. The heatmap based on VG content showed four main clusters (Cluster1-4) of related VG profiles, which corresponded roughly with virotypes and clonal subsets. Cluster 1 and 2 grouped mainly H30Rx isolates and corresponded mostly with virotype E and A, respectively. Cluster 3 split into two main subclusters; one grouped all non-H30 isolates (all virotype D3), whereas the other grouped a mix of isolates from different clonal backgrounds (virotype C). Finally, Cluster 4 grouped mainly H30R1 isolates and corresponded mostly with virotype C. These four clusters differed mostly for presence/ absence of pap, kpsMII, specific group 2 capsular variants, hra, afa/dra, ibeA, and traT ( Figure 1).
Experimental virulence outcomes vs. bacterial characteristics
In the murine sepsis model, ISS was fairly high overall (median 3.9, on a 1-5 scale), but varied greatly by isolate (IQR 2.2), with approximately half (54%, 44/84) of isolates qualifying as "killers". ISS and "killer" status were significantly associated (median ISS: "killers", 4. By contrast, H30Rx isolates tested in 2014 or from Spain showed significantly higher ISS and were more often "killer" than H30Rx isolates tested pre-2014 or from USA (Supplementary material Table S8). Neither ESBL production nor FQ resistance was associated significantly with ISS or "killer" status.
Virulence vs. individual VGs (split by year/country)
Overall, of the 49 studied individual VGs, pap, kpsMII, and K2/K100 were associated significantly with ISS and "killer" status; the median ISS and percent "killer" were significantly higher for isolates with vs. without the particular gene (Table 5). However, with stratification by year or country, many of these comparisons lost statistical significance or differed inconsistently by subset (not shown). Table S9). By contrast, UPEC status was unsuitable for statistical analysis due to its 98% overall prevalence. Overall, VG score was correlated only weakly with ISS (rho = 0.29, p = 0.008), and was slightly higher among "killer" isolates (median score: 12 ["killers"], vs. 11 [others], p = 0.03). With stratification by year of testing, VG score was not associated with either virulence endpoint in either subgroup. With stratification by country, the correlation of VG score with ISS was only marginally statistically significant among isolates from the USA (rho = 0.37, p = 0.03) and was not significant among isolates from Spain (rho = 0.18, p = 0.23). By contrast, VG score was not associated with "killer" status in either subgroup (data not shown).
Virotype was not associated with experimental virulence (Table 4). Due to already-small group sizes, these Year of testing correlated roughly with country of origin (rho = 0.84, p < 0.001), so can be considered a surrogate for that trait.
analyses were nor stratified by year and country. Finally, aggregate VG profiles (n = 38) grouped isolates with very different experimental virulence (Figure 1), without statistically significant virulence differences between profiles (p = 0.16). Additionally, the exploration of diverse combinations of VGs, as selected based on inspection of the heatmap, identified none other than ExPEC status that significantly predicted experimental virulence (not shown).
Multivariable analysis
Given the multiple significant univariable predictors of virulence, and these variables' associations with one another, multivariable analysis (with both forced entry and forward stepwise entry) was used to clarify primary associations and to allow adjustment for year of testing, which served as a proxy for the country of origin. Candidate predictor variables included a core set (H30R1, ExPEC status, year 2014) and a supplemental set based on VGs (pap, kpsMII, K2, VG score). For predicting ISS, the initial forced-entry multiple regression model, with candidate predictors ExPEC, H30R1, and year 2014, identified as significant predictors both ExPEC (beta 1.4, p < 0.001) and year 2014 (beta 0.61, p = 0.02); H30R1 lost significance (beta 0.22, p = 0.50). The extended forced-entry model, which included the four supplemental variables as additional candidate predictors, identified ExPEC as the only significant multivariable predictor (beta 1.72, p = 0.001). The stepwise model, in which all seven variables were included as candidate predictors, yielded substantially similar results: the only significant predictors retained in the final model were ExPEC status (beta 1.2, p < 0.001) and, with lower predictive power and marginal statistical significance, year 2014 (beta 0.57, p = 0.02).
Discussion
In this study we analyzed a large collection of wellcharacterized E. coli ST131 isolates for associations between experimental virulence, as assessed in a murine sepsis model, and diverse bacterial traits, including clonal subsets, resistance markers, and virulence genotype. For this, we analyzed virulence genotype in multiple ways, including as both individual VGs and various combinations of VGs, i.e., molecular ExPEC and UPEC status, virotype, aggregate VG profile, and VG score.
Despite all strains being ST131, their experimental virulence in the murine sepsis model varied widely, both overall and within most subsets of the population, as defined based on diverse bacterial characteristics (e.g., clonal subsets, virotypes or VG score). This provided an opportunity to search for bacterial traits that correspond with experimental virulence.
According to the univariable analyses, four types of bacterial traits (i.e., specific clonal backgrounds, individual VGs, and VG combinations, and country of origin/tested in 2014) were significantly associated with ISS and "killer" status. To summarize: first, of the studied clonal subsets, H30R1 was associated negatively with both virulence endpoints, whereas a subset of H30Rx isolates (those from Spain or tested in 2014) was associated positively with these endpoints. Second, among the 49 studied VGs, pap, kpsMII and K2/K100 were associated positively with one or both virulence endpoints, although these associations varied by year and/or country. Third, of the studied VG combinations, only molecularly defined ExPEC status (robustly) and VG score (marginally) were associated (both, positively) with the virulence endpoints. Lastly, experimental virulence also varied overall in relation to year of testing (a surrogate for country of origin), probably due in part to country-specific differences in clonal subset distribution. Indeed, with stratification by clonal subset, only H30Rx isolates exhibited this by-year (i.e., bycountry) difference in experimental virulence.
Previously reported results for a subset of the present isolates showed highly variable experimental virulence, with a trend toward lower virulence for H30R1 isolates [13,14]. Other studies of experimental virulence and ST131 likewise have yielded inconsistent results, not only across different animal models (mice, zebrafish, C. elegans, and G. mellonella) but even within a given model. For example, although initial results using the murine sepsis model suggested that ST131 was highly virulent [27], subsequent studies showed marked virulence variability [13,14,28]. Our results confirm this overall variability, notwithstanding a relatively high average virulence level. Additionally, with our comparatively large strain set, we were able to document in an univariable analysis significantly lower virulence for H30R1 isolates and, among isolates from Spain, higher virulence for H30Rx isolates.
Notably, certain individual VGs (pap, kpsMII, and K2/ K100) were associated with both virulence and the H30Rx subset (vs. the H30R1 subset), especially among isolates from Spain. This is consistent withand may partially explainthe greater observed virulence of Spanish H30Rx isolates, especially because virulence associations were stronger for VGs than for clonal subsets. Indeed, previous studies support a possible direct virulence contribution from the K2 capsule in non-ST131 clonal backgrounds [12,29,30]. These findings suggest that VG content and/or combinations of VGs (i.e., ExPEC molecular definition and VG score) could predict, and may determine, experimental virulence.
By contrast with the univariable analysis, the results of the multivariable analysis showed molecular ExPEC status as the strongest and only consistently significant predictor of experimental virulence, followed distantly by year of testing (which is a surrogate for the country of origin) and K2/K100. This finding may explain the initial observation of differences in experimental virulence across clonal subsets, which also differ for their molecularly defined ExPEC fraction (i.e., higher in H30Rx and lower in H30R1).
With the multivariable adjustment, whereas the individual VGs that were univariable predictors of experimental virulence lost statistical significance, ExPEC status remained a significant predictor. This may be explained by ExPEC status accounting for the influence of the individual VGs on virulence outcomes because the molecular definition of ExPEC includes those genes [14]. The very wide confidence interval around the OR for ExPEC suggests a need for studies involving more isolates, ideally of different countries of origin to test geographical impact.
By contrast with molecularly defined ExPEC status, neither virotype nor aggregate VG profiles significantly predicted virulence, which conflicts with previous findings [8]. Such inconsistencies across studies indicate that the described ST131 virotypes or even more extensive VGs profiles (as shown here), although associated with clonal background, [12,31,32] are insufficient to reliably predict experimental virulence. Although some of our negative findings may reflect in part the small numbers per subgroup after stratification by virotype, extended VG profiles, year of testing, and/or country of origin, even our overall analysis failed to replicate certain associations noted in previous studies, despite our greater total number of isolates.
Our study has some limitations. First, the murine sepsis model mimics only partially the pathogenesis of sepsis in humans, despite being standard in the field; outbred mice may vary in their response to infection, possibly contributing to experimental variability, although also presumably improving generalizability; and the use of only female mice conceivably could bias the results.
Second, our isolates were tested at different times, although temporal effects were addressed analytically by stratification by year of testing, and seem unlikely given the temporal stability of results for the control strains and for clonal subsets other than H30Rx. Third, our virulence genotyping relied on DNA detection for a limited set of VGs. Conceivably, expression/regulation of these or other (unaddressed) VGs linked to these may underlie the observed associations between VG presence and experimental virulence, and/or account for the residual unexplained virulence variation.
Fourth, the molecular definitions used for ExPEC and UPEC refer to the presence/absence of specific VGs, so do not track reliably with the source of isolation. However, they predict biological ExPEC status, defined as a strain's intrinsic ability to cause extraintestinal infection, more accurately than does clinical source or presentation. Fifth, due to VG genotype variability within E. coli, strains that qualify as molecular UPEC do not necessarily qualify as ExPEC and vice versa, regardless of source; here, UPEC status was too prevalent for valid statistical analysis. Sixth, for some comparisons, statistical power was reduced by stratification by country and year of testing, and by the low prevalence of certain variables.
The study has also notable strengths. These include the large number of strains tested (making it, to our knowledge, the largest study to date of experimental virulence of ST131 strains in any animal model); the use of an established sepsis model that among those available most closely mimics human disease; attention to an extensive range of bacterial traits, including single and combined VGs and different ST131 subsets; and use of multiple statistical approaches, including multivariable modeling.
In conclusion, we found considerable variability in experimental virulence between and within the different ST131 clonal subsets, which differed significantly for VG content, ExPEC status, and virotype. With the multivariable adjustment, ExPEC status was the only consistently significant outcome predictor. Thus, composite markers such as ExPEC status are useful for identifying potentially virulent ST131 strains. These findings may help in devising screening tests and identifying targets for therapeutic or preventive interventions against infections caused by ST131 strains. They also indicate a need to study further the virulence determinants of ST131 (including possibly with in vitro assays such as serum resistance or survival in phagocytes), and to identify an explanation other than sepsis-causing ability for ST131's impressive epidemiologic success. | 4,562.6 | 2020-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
A numerical and experimental study of coaxial jets
An algebraic stress model and the standard k-e{lunate} model is applied to predict the mean and turbulence quantities for axisymmetric, nonswirling coaxial jets without confinement. To investigate discretization schemes, namely, hybrid, power-law, and the flux-spline, are employed. In addition, an experimental study is conducted to provide data of good quality, especially near the inlet, for model assessment. The results show that the use of the algebraic stress model leads to better agreement between the numerical results and experimental data. © 1989.
Introduction
In recent years, considerable research has been directed towards the evaluation of various turbulence models for complex flows (e.g., Refs.1-3).However, a definitive evaluation has been hampered by the presence of excessive numerical (false) diffusion in the computed solutions and the lack of benchmark quality experimental data.Many of the prior computations have been performed using the hybrid (central/upwind) differencing scheme for the convective terms in the transport equations.Such a practice leads to excessive numerical diffusion which may be comparable to the physical diffusion.Further, the hybrid scheme responds very slowly to grid refinement, and an extremely large number of grid points may be required to obtain a grid-independent solution.A solution to the false diffusion problem is the use of higher order discrctization schemes for the convective terms.These schemes have been shown to be more accurate than the hybrid scheme for the same number of grid points and hence have the potential of providing a grid-independent solution without requiring an excessively large number of grid points.Examples of these schemes are QUICK (Ref.4), the skew upwind differencing sceheme (Ref.5), and the flux-splint scheme (Refs.6, 7).
In addition to numerical accuracy, another important consideration in the assessment of a turbulence model is the availability of reliable experimental data.The lack of "correct" boundary conditions may result in predictions which would not compare favorably with the experimental data and may cause erroneous inferences to be drawn about the turbulence model.The errors associated with the discrctization scheme, experimental uncertainty, and the turbulence model occur simultaneously and cannot be separated.It is, therefore, imperative that the errors from the first two sources be minimized so that the discrepancies between the experimental and computed results can definitely be attributed to the failure of the turbulence model.
In this paper, computations for unconfined axisymmetric nonswirling coaxial jets are reported.To assess the effect of false diffusion, solutions have been obtained using various differencing schemes on a fine grid.The schemes used are hybrid, power-law differencing scheme (Ref.8), and the fluxspine scheme (Refs.6,7).Both the standard k-e model (Ref.9) and the algebraic stress model (Ref.10) were applied in this study.These computations have been compared with detailed experimental data obtained using a two-component phase-Doppler technique.Prior studies which deal with some of the aforementioned issues related to the evaluation of turbulence models include the work of Leschziner and Rodi (Refs. 11,12), Hackman, Raithby, and Strong (Ref.13), and Claus (Ref. 14).
In the next section experimental procedure is described.This is followed by the mathematical model, which includes the turbulence models, solution algorithm, discretization schemes, boundary conditions, and other computational details.In the last section, the numerical results are compared with the experimental data, and the turbulence models are evaluated.
Test facility
A testing facility was designed to characterize a wide variety of flows under isothermal conditions (Figure l(a)).For the present study, an unconfined flow configuration was selected, operating with an axial jet injector surrounded by a nonswifling annular jet, as shown in Figure l(b).In this configuration, the injector was directed vertically downward within a 457-mm 2 wire mesh screen.The entire test assembly is surrounded by a flexible plastic enclosure which serves two purposes.First, the enclosure helps damp out extraneous room drafts.Second, and more important, the enclosure allows uniform seeding of the entrained air, thereby permitting unbiased measurements in the jet outer region.The test section air flows into a sealed collection drum and then into a suction vent connected to an exhaust blower.A slide valve on the vent allows for a variable duct back pressure.The support cage is mounted on an optics table from below via a two-axis traverse system in order to allow
Velocity measurement
A two-color, two-component laser anemometer system was used to measure the velocity components, At each spatial point the laser simultaneously measured two orthogonal components of velocity.In order to get all three components, two scans were taken.One was used to measure U, V, ~, v ~, uv components and the other one was able to measure U, W, ~, w ~I, ~-ff components.Thus all three components were measured, with the U velocity and its fluctuation measured twice.
Error estimates
The measurement errors can be broken down into four categories: (1) errors associated with the instrumentation and hardware, (2) uncertainty due to finite number of samples taken at each point, (3) repeatability limitations, and (4) validity of axisymmetric-flow assumption.
Category 1.The instrument accuracy is associated with the fringe spacing in the sampling volume.The error in this value is + 0.1 microns with the optical setup used.This translates into a _+ 1% error in the measurement of mean velocity.The fringes were orthogonal to within 1 °.
Category 2. At each point, 35,000 samples were taken, giving a maximum error of 0.04m/s (to a 95% confidence) in the regions with the highest fluctuating velocities (1.3 m/s).
Category 3.During the course of one measurement sequence (i.e., a single hardware setup and alignment), the mean and fluctuating velocities repeat to within 2% or 3% at a given point.The shear stress values generally repeat to within 5%.
Category 4. The results of mean and rms axial velocities measured along two orthogonal profiles show that the differences are within 2%.
Data sets
The flow conditions used for the case considered in this paper are given in Table 1.The center jet (diameter, D = 24.1 mm) is surrounded by an annular jet with the inner and outer diameters of 29 mm and 36.7 mm, respectively.The effective area ratio and axial velocity ratio of the annular jet to the center jet are 0.87 and 1.8, respectively.The experiment is documented in Ref. 15 and are tabulated following the format outlined in Ref. 16.
Mean flow equations
In this section, the equations which govern the distribution of the mean quantities are summarized.These equations are derived from the conservation laws of mass and momentum using time averaging and are expressed in tensor notation for steady and constant density flow as: where the bar is used to denote time-averaged quantities.
As a consequence of the nonlinearity of Equation 2, the averaging process used introduces unknown correlations which are modeled through a "turbulence model."
Turbulence models
In this paper, two turbulence closure models are considered, namely the standard k-5 model and an algebraic stress model.In the k-5 model, the turbulent fluxes are related to the mean fields through the assumption of an isotropic eddy viscosity and a turbulent Prandtl number as:
\OX~ OX,/
The eddy viscosity (#,) is obtained from the turbulent kinetic energy (k) and its dissipation rate (5) using the relation:
(p ,uA -gy,,-o5
(5) oxj ) The constants used in this model have been taken from Ref. 9 and are given in Table 2.
The k-5 model has been used with success in the calculation of various free shear flows and recirculating flows with and without swirl (e.g., Ref. 17), However, in flows with significant streamline curvature, the isotropic eddy viscosity assumption may not be able to describe the turbulent diffusion effects adequately.
The second turbulence model considered in this study is an algebraic stress model (ASM).The algebraic stress model is a special case of the Reynolds stress transport equation which The order-of-magnitude argument is performed on an equation for (2a~fl), which is obtained by subtracting the product of ~u/3 and the transport equation for 2k from the transport equation for u~j.The resulting equation becomes where the differential operator .~ is used to denote the combined convective and diffusive transport operators.Terms in Equation 8 are evaluated in powers of a and terms of the order of a 2 and higher are neglected.The result of this simplification is: where Pu is the production of the Reynolds stresses, ~b u is the pressure redistribution term, and Pk is the production of the kinetic energy.In the algebraic stress closure, a model for the pressure-strain term (~bq) is required.Here, the model of Launder et al. (Ref.19), which includes both the symmetric and antisymmetric mean-strain effects on redistribution modeling, is selected.These quantities are obtained from the following equations: OXk Oxd where In the above equations, cl, ~t, fl, and ?are model constants.
According to Launder et al. (Ref. 19), ct, fl, and ?can be related to a single constant C 2. Therefore, only two model constants are introduced in Equation 12 rather than four as shown.The constants used in the present study are listed in Table 3.
Solution algorithm
The computations for coaxial jets can be made using a parabolic marching procedure if the radial pressure gradients are small.Such a situation occurs if velocities in the two streams are comparable or the inner stream is faster and if the swirl is weak.However, if the swirl is strong and/or the outer stream is significantly faster, the radial pressure gradients become significant and a region of reverse flow develops.The ultimate goal is to extend this study to swirling flow analysis.Therefore, a calculation procedure based on elliptic flows was used.
The discretization equations are obtained using a controlvolume approach (Ref.8).The details of the differencing schemes for the convection and diffusion terms are given in the next section.The coupling between the continuity and momentum equations is handled using the SIMPLER algorithm (Ref.8).The algebraic equations are solved using a line-by-line tridiagonal matrix algorithm (TDMA).
Discretization schemes
Numerical solutions have been obtained using three schemes for the convection and diffusion terms in the transport equations.These schemes are briefly described below.
Hybrid scheme.In this scheme (e.g., Ref. 8) both the convection and diffusion terms are approximated using the central differencing scheme if the mesh Peeler number (Pc) is less than two.Outside this range, the upwind scheme is used for the convective terms and physical diffusion is neglected.
Power-law scheme.This scheme (Ref.8) is based on a curve-fit to the exact solution of the one-dimensional convectiondiffusion equation without source terms.This scheme becomes identical to the hybrid scheme for Pc> 10.
Flux-spline scheme.The hybrid and power-law differencing schemes can be considered as approximations to the exponential scheme (Ref.20) which results from the exact solution to the one-dimensional convection-diffusion equation without a source.In the derivation of these schemes, the total flux (convection + diffusion) is assumed to be uniform between two grid points.These schemes work well only in problems in which either the flow is closely aligned with the grid lines or there are no strong cross-flow gradients.If such idealized conditions are not encountered, the locally one-dimensional assumption used in these schemes gives rise to numerical (false) diffusion.
In the flux-spline scheme (Refs.6, 7), the total flux is assumed to vary in a piecewise linear manner within a control volume.This assumption leads to a scheme in which the discretization coefficients are identical to those from the exponential scheme but there is an additional source term which involves the differences in fluxes at adjacent faces of a control volume.The presence of this source term enables the flux-spline scheme to respond to the presence of sources and/or multidimensionality of the flow.
Boundary conditions
A calculation procedure for elliptic flow requires boundary conditions on all boundaries of the computational domain.Four kinds of boundaries need consideration, namely, inlet, axis of symmetry, outlet, and the entrainment boundary.At the inlet boundary, which was located at the first measurement plane, the measured profiles of U and V were prescribed.The k-profile was obtained from the measured Reynolds stresses.These profiles are shown in Figure 2.This kinetic energy distribution and the measured shear stress profile were used to derive the e values at the inlet plane through the following relationship: At the axis of symmetry, the radial velocity and the radial gradients of other variables are set to zero.At the outlet, axial diffusuion is neglected for all variables.Along the entrainment boundary, which was placed sufficiently far from the axis of symmetry, the quantity (rV) was assumed constant.In addition, the axial velocity U was assumed zero and k and ~ were assigned arbitrarily low values yielding an eddy viscosity,/~t = 10/~.
Computational details
The computational mesh used for all calculations consisted of 76 x 69 nonuniformly distributed grid points in the axial and radial directions.A finer grid spacing was used near the inlet, centerline, and in the shear layer.The computational domain extended from the first measurement plane, located downstream of the nozzle exit at a distance of 2.0 mm to 40 inner jet diameters downstream of the nozzle exit.In the radial direction, the entrainment boundary was placed at a distance of six jet diameters from the axis of symmetry.
The convergence criterion used to terminate the iterations was that the absolute sums of the mass and momentum residuals at all internal grid points, normalized by inlet mass and momentum fluxes, be less than 5 x 10-3.
Results and discussion
In this section, the results for nonswirling coaxial jets are presented.The numerical results were obtained using two turbulence models with various differencing schemes.The calculated mean and turbulence quantities are also compared with the measurement at selected stations.
The k-8 turbulence mode/
The effect of different discretization schemes is shown by comparing the predicted axial velocity profiles at three streamwise locations, namely, x=15, 35, and 75mm.The velocity profiles are presented in Figure 3.It is noted that, except for some minor differences, all three schemes for the convective terms yield nearly identical results.In earlier studies (Refs.6, 7), it was shown that in the regions of high Peclet number the fiux-spline results are more accurate than those from the power-law scheme• The fact that for the present situation there are no significant differences between results from these schemes indicates that the results are grid-independent.The differences between the hybrid and the power-law schemes are attributed to the different treatments of the diffusion terms.The computed results at the selected axial stations compare reasonably well with the experimental data.The computations consistently show sharper gradients than the experiment at the points of the maximum and minimum velocity.Figure 4 shows the kinetic energy profiles at three axial locations• The experimental kinetic energy profiles were derived from the measured Reynolds stresses.In these figures, results from the power-law scheme and flux-spline scheme have been shown.Again, the two sets of computations are in close agreement with each other.Most of the differences are seen in the regions of steep gradients where the flux-spline results are expected to be more accurate.The agreement between the predicted and experimental values of kinetic energy is not as good as that for the axial velocity.Even though the trends are similar, the predicted kinetic energy levels are smaller than those derived from the measurements.
Since the present calculations are essentially free of numerical diffusion, the discrepancies between the experimental data and the predictions can be attributed to two sources--improper boundary conditions at the inlet plane and the deficiencies of the turbulence model.As regards the inlet conditions, all quantities except the dissipation rate were prescribed from the experiment.The e values, however, were derived from the measured shear stresses and the mean velocity gradients.The uncertainties in the measurements and in the evaluation of the velocity gradients may lead to errors in the e values which would adversely affect the calculations at downstream locations.
Numerical experiments indicate that the inlet e profile is the single most important factor in predicting the maximum values of mean and turbulence quantities, provided a reasonable inlet kinetic energy distribution is available.To study the sensitivity to inlet e profile, calculations were also made using an alternative distribution, which were derived from the turbulence kinetic energy and an assigned length scale distribution (3% of the radius).The inlet ~ profiles for both cases are shown in Figure 5.The major differences between these two conditions are near the centerline region; however, the peak values are about the same.The predicted results of mean axial velocity and turbulent kinetic energy at an axial location of 15 mm are shown in Figure 6.The results show that the turbulent kinetic energy has decreased due to excessive inlet dissipation rate in the inner region.On the other hand, the mean velocity is not affected significantly.This can be attributed to the fact that the maximum value of the inlet e in the annular region has not been changed considerably.
The algebraic stress model
In this section, the predictions using the ASM have been compared with those from the k-e model.Similar to the trends observed in the k-~ model calculations, the effect of various discretization schemes on the ASM results was found to be rather insignificant.Consequently, results from different schemes will be shown only for some cases.
The predicted mean axial velocity profiles from the ASM and k-e models have been compared in Figure 7.These results were obtained using the flux-spline discretization scheme.The use of ASM improves the overall agreement between the predictions and the experimental data.The major differences between the two turbulence models are seen in the regions where a maxima or a minima occurs in the velocity profiles.
The predicted turbulent shear stress from the ASM has been compared with the experimental data in Figure 8.Here results from the power-law differencing scheme have also been included to assess the numerical accuracy of the results.Both discretization schemes give nearly identical results, indicating that the solution is grid-independent.The positive peak in the shear stress profile corresponds to the shear layer between the two streams and the negative peak corresponds to the shear layer associated with the expansion.The agreement between the calculation and the experimental data is good, although the peak values are not well predicted.
The normal stresses at different axial locations are shown in Figure 9. Again, results from both the power-law and flux- for ~, underpredicted for ~, and closely predicted for w -'z.This clearly indicates the lack of performance of the pressure-strain model.One reason for this could be that the constant C2 used is not suitable for complex turbulent flows.Since C2 is determined from simple turbulent flows in local equilibrium, it would be more appropriate for equilibrium ASM than for nonequilibrium ASM.Another reason could be the incorrect modeling of the mean strain part of the pressure-strain term.It -
Concluding remarks
Based upon the preceding discussion, the main conclusions can be summarized as follows: (1) The mean flowfield predicted using the k-8 model agrees reasonably well with the experimental data.However, the o Lo. (3) The shear stress predictions using the algebraic stress model are in good agreement with the experimental data.For normal stresses, there are considerable discrepancies between the experimental and numerical results.(4) The discrepancies between the data and the algebraic stress model solution may be related to the pressure-strain correlation model.(5) The effectiveness of the turbulence models, to some extent, can be obscured by boundary conditions.Inlet conditions for e, especially the peak values, are found to be an important factor in determining the maximum velocity values.
Figure I
Figure I Experimental setupthe two degrees of freedom in the horizontal plane.The injector is mounted on a vertical spar to provide the third degree of freedom in the vertical direction.The control volume spatial location is monitored via a three-axis digital indicator which permits positioning to within 0.01 mm.Data were obtained at seven axial stations: 15, 25, 35, 50, 75, 150, and 300mm from the exit plane of the injector.At each axial station, between 10 and 20 radial points were scanned as determined by the desired level of profile resolution.
( 4 )
Two additional partial differential equations are solved to obtain k and 5.These are Figure 2
Figure 3
Figure 3 Comparison of measurements with mean axial velocity calculations
Figure 4
Figure 4 Comparison of measurements with turbulent kinetic energy calculations
uFigure 6 Figure 7
Figure 6 Profiles of mean axial velocity and turbulent kinetic energy using different inlet turbulent dissipation rate Coaxial jets: M. Nikjooy et al.
U V m2/s 2 Figure8
Figure8 Comparison of the algebraic stress model (ASM) prediction of turbulent shear stress with measurements
7 Figure 10 Figure I 1
Figure 10 Comparison of the algebraic stress model prediction of radial normal stress with measurements
Table 2
Values of constants in the k-e model Coaxial jets: 1t4.Nikjooy et al.relates the individual stresses to mean velocity gradient, turbulent kinetic energy, and its dissipation rate by way of algebraic expressions.Algebraic stress models can be classified into two categories.The first is based on a local equilibrium assumption for the turbulence field, whereby the turbulence transport terms are neglected compared to the local production and dissipation of turbulence.A second class of ASM is based on the local nonequilibrium assumption.Approaches of this kind, where the convection and diffusion transport of turbulent stresses are approximated, have been developed byMellor and Yamada (Ref.10)andRodi(Ref.18).Following the recommendation in Ref. 3, the model proposed byMellor and Yamada (Ref.10)hasbeenadopted in this study.In the Mellor and Yamada model (Ref.10), the Reynolds transport equations are simplified through an order-ofmagnitude argument based on a2=O(a~), where % is the nondimensional measure of anisotropy and is given by | 4,921 | 1989-09-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
A Low-Cost Inertial Measurement Unit Motion Capture System for Operation Posture Collection and Recognition
In factories, human posture recognition facilitates human–machine collaboration, human risk management, and workflow improvement. Compared to optical sensors, inertial sensors have the advantages of portability and resistance to obstruction, making them suitable for factories. However, existing product-level inertial sensing solutions are generally expensive. This paper proposes a low-cost human motion capture system based on BMI 160, a type of six-axis inertial measurement unit (IMU). Based on WIFI communication, the collected data are processed to obtain the displacement of human joints’ rotation angles around XYZ directions and the displacement in XYZ directions, then the human skeleton hierarchical relationship was combined to calculate the real-time human posture. Furthermore, the digital human model was been established on Unity3D to synchronously visualize and present human movements. We simulated assembly operations in a virtual reality environment for human posture data collection and posture recognition experiments. Six inertial sensors were placed on the chest, waist, knee joints, and ankle joints of both legs. There were 16,067 labeled samples obtained for posture recognition model training, and the accumulated displacement and the rotation angle of six joints in the three directions were used as input features. The bi-directional long short-term memory (BiLSTM) model was used to identify seven common operation postures: standing, slightly bending, deep bending, half-squatting, squatting, sitting, and supine, with an average accuracy of 98.24%. According to the experiment result, the proposed method could be used to develop a low-cost and effective solution to human posture recognition for factory operation.
Introduction
In the era of Industry 4.0, motion capture systems will find broader applications in engineering for digital human modeling [1].In the factory, the recognition of human body movement contributes to human-machine collaboration [2] and human factor analysis [3].In contrast to optical cameras, inertial sensors are more flexible and resistant to obstruction, making them suitable for scenarios such as automotive assembly [4,5].Researchers have established methods for capturing full-body motion by sparse inertial sensors.Susperregi et al. [6] proposed the fusion of multiple low-cost sensors and cameras to capture human behavior, addressing data bias through data fusion.Caputo et al. [4] utilized a motion capture system to estimate the basic segment positions of the human body.He et al. [7] introduced a wavelet tensor fuzzy clustering scheme for analyzing multisensor signals to capture human behavior, achieving higher recognition accuracy compared to the fuzzy mean clustering method.Liu et al. [8] developed a segmentation procedure based on a moving average window algorithm, and introduced a double-threshold technique for automatic recognition and segmentation of calibration postures.Yi et al. [9] tracked human motion using only six inertial sensors, combining a neural kinematic estimator and a physical perception motion optimizer.Previous work has provided good guidance for achieving low-cost inertial sensor dynamic capture, giving IMUs potential application prospects in the engineering field.
Due to the inevitable presence of a large number of metal objects in the factory environment, the negative impact on magnetometers needs to be considered.Therefore, we have chosen the cost-effective six-axis sensor chip BMI160, along with the ESP8266-NodeMCU chip, IP5306 BMS charging board, and li-ion battery, to collectively form our tracker; the total cost is USD 3.60.A comparison of prices and performance of other IMU solutions is presented in Table 1, covering aspects such as price, sampling rate, accelerometer rate noise spectral density, gyroscope rate noise spectral density, interface mode, and battery life.The IMU solutions commonly used in human motion recognition research, such as Xsens MTw Awinda [10], MetaMotionR [10], Next-Generation IMU [11,12], MetaMotionC [13], Shimmer3 [14], and InvenSense MPU-9250 [15], were chosen.Through comparison, our solution demonstrates advantages in terms of pricing.Moreover, for the collection of human body movements, the sampling rate and accuracy within our solution fall within acceptable ranges, and the interface and the battery life employed in our solution are also sufficient for human motion capture.Machine learning is commonly used in human motion recognition research, for example, the support vector machines classification model [16], Markov model [17], and random forest (RF) [18].In the past few years, deep learning algorithms have found extensive applications in the realm of human motion recognition [19], demonstrating superior recognition performance compared to traditional algorithms [20,21].Akkaladevi et al. [22] proposed a multilabel human action recognition framework using a spatiotemporal graph convolutional network (ST-GCN) to capture spatial and temporal relationships between joint sequences.Tang et al. [23] introduced a novel dual-branch interactive network (DIN) that incorporates the strengths of both CNNs and transformers for managing multichannel time series.Wang et al. [24] explored adaptive networks that can dynamically adjust their structure based on available computing resources, allowing for a trade-off between accuracy and speed.Dey et al. [25] utilized a three-layer stacked temporal convolutional network to predict foot angular positions.Oh et al. [26] employed a pattern recognition method based on an artificial neural network algorithm to detect different gait states.Seenath et al. [27] proposed the conformer-based human activity recognition model, which leverages attention mechanisms to better capture the temporal dynamics of human motion and improve recognition accuracy.Considering that IMU motion capture data contain both temporal and spatial information, Chen et al. [28] used a deep convolutional neural network with a bidirectional long short-term memory network (DCNN-BiLSTM) to recognize and estimate four swimming styles.Based on deep learning algorithms, the accuracy of human motion recognition can reach around 90%.Based on existing research, we will carry out IMU-based human operation posture recognition.
Existing product-level inertial motion capture devices generally require high purchasing costs.This paper aims to explore a low-cost operation motion capture system and an operation posture recognition solution based on IMU.The cost-effective core components are used to build the human motion capture system, and experimental tests are conducted in virtual factory environments.A deep neural network model is used to recognize multiple basic operation postures offline based on experimental datasets.This paper is organized as follows: Section 1 offers an overview of the research status and significance of IMU-based human motion capture and operation posture recognition.Section 2 introduces the low-cost assembly operation motion capture scheme based on IMU from the aspects of hardware configuration, motion signal processing, and human motion reproduction.Section 3 describes the operation motion capture experiment and the operation posture recognition method based on the BiLSTM model.Section 4 discusses the proposed research methods and suggests future research directions; Section 5 summarizes the proposed research work.
Overall Solution
This paper proposes a low-cost human motion capture system based on IMU.As shown in Figure 1, the system consists of four main components: a firmware module, a hardware module, a signal processing module, and a synchronized visualization module.In the hardware part, the core modules include the inertial measurement, communication, and charging modules.The inertial measurement module utilizes BMI160.The communication module uses the ESP8266 chip for wireless communication via WiFi.The charging module consists of a charging integration board, a battery, and a switch.The BMI160 is driven by the CH341SER.The firmware code is compiled and run in PlatformIO IDE (VSCode).The tracker signals are transmitted to the host computer via WiFi, where the collected data are processed to obtain the pose information of the sensors.The trackers are assigned to the corresponding joint positions of the digital human body based on their actual wearing positions.Combined with the hierarchical relationship of the human skeleton, the real-time calculation of human posture is performed.Finally, using the Open Sound Control (OSC) network transmission protocol, the system synchronously visualizes human motion through a 3D digital human model in Unity3D.
Hardware
The main functional components of the action tracker are the BMI160 IMU module, the ESP8266-NodeMCU module, and the IP5306 BMS charging module.Considering usability and price, the BMI160 was chosen to implement the inertial measurement function.The BMI160 chip module includes a three-axis accelerometer and a three-axis gyroscope.The chip features three 16-bit analog-to-digital converters (ADCs) for digitizing the accelerometer outputs and three 16-bit ADCs for digitizing the gyroscope outputs with standard IIC (up to 1 MHz)/SPI communication protocol.The chip can monitor an acceleration range of ±4 g and an angular velocity range of ±250°/s.The sampling rate is 100 Hz.In coordination, the ESP8266 Node MCU module, which is a version containing the ESP-12F WIFI unit with a peak power consumption of approximately 1.5 W, was selected for communication, supporting WIFI connections in the 2.4 G frequency band.Additionally, the charging module was designed using the TP4056 Type-C charging chip, an input voltage of 5 V, and a maximum charging current of 1000 mA.The 3.7 V, 1500 mAh lithium battery was chosen.Finally, two-position toggle switches were selected to control the tracker's on and off functions.The circuit diagram and physical diagram of the tracker are shown in Figure 2a.The wires were soldered in a tightly arranged manner to minimize the size of the tracker.The tracker's housing was 3D printed, with a total length of 54 mm, a total width of 39 mm, and a total height of 29 mm.The strap width is 25 mm, as shown in Figure 2b.In this paper, six motion trackers are used, strapped respectively to the chest, and waist, above the left knee joint, above the left ankle joint, above the right knee joint, and above the right ankle joint of the human.From top to bottom, these trackers represent the movements of the chest, waist, knee end of the femur bone, and the ankle end of the tibia bone.The wearing positions and directions of IMU are shown in Figure 2c.When wearing, the direction of the BMI160 inside each tracker is consistent, with the Y-axis pointing towards the ground and the Z-axis pointing towards the front of the body.The length values of each segment of the experimenter's body have been pre-inputted into the terminal, and the movement status of the trunk and lower limbs can be obtained by providing joint displacement and angle.Before the motion capture experiment, the experimenter needs to make two designated postures: upright posture and skiing posture, to calibrate the initial direction of each tracker.
Signal Processing
The processing of motion signals involves two main parts: filtering and drift compensation of IMU signals.Kalman filtering algorithm is used for filtering.Human motion is irregular but within a certain activity space.The Kalman filtering algorithm is a classic method for processing IMU signals, which consists of predicting the position of the next time step, and correcting the position of the current state.The specific implementation principle is as follows. Xk Above are prediction equations.In Equation ( 1), Xk represents the prior state estimation at time k, and Xk−1 represents the posterior state at time k−1, respectively.A k is a transformation matrix that represents the proportion of the previous state's correction to the current state result.B k represents the control variable matrix, and u k is the state control vector.In Equation ( 2), P k represents the prior estimate covariance at time k, and P k−1 represents the posterior estimate covariance at time k − 1, Q is the covariance of the system process noise.
Equation ( 3) calculates the Kalman gain (K k ), in which H k represents the prediction matrix and R is the covariance matrix of the measurement noise.Equation ( 4) uses two predicted values and a ratio to calculate the output Xk , the posterior state estimation at time k.Z k is a measurement vector.Equation ( 5) prepares the posteriori estimation covariance at time k(P k ) for the prediction of the next time step.
The drift compensation part mainly involves applying inverse rotation to compensate drift of the IMU.In this study, signal processing and fusion are based on the Slime VR open-source software, a recently matured open-source motion capture solution based on IMUs.Based on our experimental environment and equipment, after multiple tuning and testing sessions primarily focusing on the accuracy and stability of reproducing human motion, we finally set the filtering strength to 50% and drift compensation strength to 20%.The original signals collected by IMUs consist of XYZ tri-axis acceleration signals and XYZ tri-axis gyroscope signals.The displacement information can be obtained by integrating the acceleration signal, while the rotation angle information can be obtained by integrating the gyroscope signal.
The calculation method for obtaining the current pose from two frames of IMU data is as follows.For the acceleration data, calculate the average acceleration between the current time t and the next time t + 1.This average acceleration over the time interval is used to approximate the velocity and displacement at t + 1, given the initial velocity and displacement at t. Since the IMU acceleration data is represented in the body coordinate system, it needs to be transformed to the world coordinate system using the corresponding pose.Before the transformation, the bias needs to be subtracted, and after the transformation, the gravitational acceleration needs to be subtracted.For the gyroscope data, the average angular velocity over the time interval is calculated between t and the next time t + 1.With this average angular velocity and the current pose, the pose at t + 1 can be approximated.Equations ( 6)- (12) show the entire integration process.
where a t,w is the acceleration of IMU at time t in the world coordinate system, Q t is the quaternion of IMU at time t, a t,b is the acceleration at time t in the body coordinate system, B a is the deviation of the body coordinate system, and g is the gravitational acceleration.
In Equations ( 7) and ( 8), ωt is the average angular velocity, ω t is the angular velocity at time t, ω t+1 is the angular velocity at time t + 1, and B g is the gyroscope bias, Q t+1 is IMU Quaternion at time t + 1.
āt,w = 1 2 (a t,w + a t+1,w ) In Equations ( 9) and ( 10), a t+1,w is the acceleration in the world coordinate system at time t + 1, a t+1,b is the acceleration in the body coordinate system at time t + 1, and āt,w is the average acceleration.
In Equations ( 11) and ( 12), V t is the velocity at time t, V t+1 is the velocity at time t, D t is the displacement of the IMU at time t, and D t+1 is the displacement of the IMU at time t + 1.
Online Synchronized Display of Human Body Motion
The online synchronized display of human body movements is achieved based on the tracker's pose information and the hierarchical relationship of the human body skeleton.This study uses a simplified digital human model to focus on the operational movements of the human torso and lower limbs.A 3D digital human model was built on the Unity platform.The joint composition of the digital human includes thoracic joints, lumbar joints, left and right hip joints, left and right knee joints, and left and right ankle joints.In constructing the digital human model, the thoracic, lumbar, and hip joints comprise three independent subjoints capable of generating rotation, pitch, and yaw movements.The ankle joint is generally considered a ball joint with two independent axes of rotation, while the knee joint has only one axis of rotation.The head and upper limb segments are set to default states.Figure 3 shows the skeletal and digitized human models with skinning.The lengths of the body segments are set according to the experimenter's height (1580 mm) and standard body proportions.
Unity and the IMU host can communicate through the OSC protocol to achieve an online synchronized display of human body movements.Figure 4 shows the real-time human body movement at a certain moment and the corresponding movements of the digital human model at the same moment.
Basic Operation Postures
By observing the assembly and maintenance operation processes of large-scale equipment, several common assembly basic postures that facilitate exerting force could be summarized: standing posture, slightly bending posture, deep bending posture, half squatting posture, squatting posture, sitting posture, and supine posture.Operators could perform upper limb actions based on these basic postures, such as pushing (pulling), tightening (loosening), gripping, tapping, etc.The labels, names, and reference images of the basic working postures are shown in Table 2.The definition of postures mainly considered the range of bending angles of the torso, the range of bending angles of the hip joint, and the range of bending angles of the knee joint.Labels have been defined for these basic postures.
Operation Posture Collection Experiment
As shown in Figure 5, an immersive assembly scene was set up to facilitate participants making corresponding assembly movements based on prompts using Tecnomatix software and HTC VIVE devices.The participant wearing the tracker completed the operation tasks under instructions.The router was not connected to other devices to obtain sufficient bandwidth during the experiment, and the entire experiment process was recorded.At the same time, we tried to avoid other 2.4 G signals to prevent excessive data transmission delay caused by frequency congestion in the experimental environment.The experiment was conducted within a radius of 5 m from the router to ensure low data transmission delay.The average latency during the actual testing process was approximately 3 ms.With time, the BMI160 may experience drift, causing body parts to face the wrong direction after some time.Therefore, a calibration of the wearable device was required every 10 min.
The participants sequentially completed seven different types of work tasks under voice prompts.Each work task corresponds to a category of basic working postures.A rest period was scheduled between the fourth and fifth tasks for device reset.Table 3 displays the duration of each operation task.The experiment involved a participant with a mechanical engineering background familiar with assembly processes.The participant's height is 1580 mm, and weight is 55 kg.
Operation Posture Recognition Method
After signal processing, the experiment data were organized as the cumulative displacement of six joints and the joint angles of six joints over time.In preparation for posture recognition, removing the preparation and rest periods and labeling the remaining periods with corresponding posture labels is necessary.As shown in Figure 6, Taking the curve of chest joint angle over time as an example, the gray area in the graph represents the excluded periods.In the remaining periods, each color represents a category of working posture.BiLSTM is a deep learning model suitable for sequential data, and particularly effective for data with a temporal structure, such as time series.BiLSTM effectively captures contextual relationships and long-term dependencies in sequential data by combining forward and backward information.In recent years, the BiLSTM model has been commonly applied in research on IMU-based human posture recognition, demonstrating excellent recognition performance.Based on the experiment data, the BiLSTM model was used to recognize the seven basic operation postures: standing posture, slightly bending posture, deep bending posture, half-squatting posture, squatting posture, sitting posture, and supine posture.The operation posture recognition network structure is shown in Figure 7. Labeled experimental data were transformed into the dataset using a sliding window technique.The window length is 50 and the sliding size is 5; a total of 16,067 labeled samples were obtained for training.The input features included the displacement and rotation angles of six joints (chest, waist, left hip, right hip, left ankle, right ankle) in the XYZ direction, resulting in 36 features.The input layer of the network module is a 16,067 × 36 matrix.The input sequence is processed by two separate LSTM layers, each observing the sequence in both the forward and backward directions.The number of hidden neurons in each LSTM layer is 64.The input time-series data first pass through the forward layer.For each time step, the forward LSTM unit updates its internal state and produces an output.Similarly, the input sequence data also go through the backward layer.For each time step, the backward LSTM unit updates its internal state and produces an output.The outputs from both the forward and backward directions are merged.The merged representation is then passed to a fully connected layer.Finally, it is fed into an output layer for classification, using the softmax activation function to generate a probability distribution over the classes.The outputs of multiple neurons are mapped to the range of 0-1 to obtain the predicted probability distribution, which represents the probability of belonging to each category and enables posture prediction.This model was compiled using the cross-entropy loss function and adaptive moment estimation (Adam) optimizer.We divided the dataset into training and testing sets in a 4:1 ratio, the random state was set to 42.The epochs and batch size were set to 10 and 32.The initial learning rate was set as 0.001.L2 regularization with a dropout of 0.5 was selected to prevent overfitting of the model.
Operation Posture Recognition Result
The offline test was conducted on a workstation with an Intel Core i7-1165G7 CPU and NVIDIA GeForce MX 450 GPU.To reduce the random effects of the training tests, the sample order was randomly shuffled and the training test was repeated five times.The average training time was 75.08 s. Figure 8 shows the training and validation loss as the number of iterations increases.It can be observed that the loss curves of the training set and validation set tend to flatten after the 8th iteration.After the 10th generation, the test set loss remained stable below 0.05.
After the test, the average accuracy of posture prediction was 98.24%.The posture prediction transition time, including data preprocessing time and inference time, is 31 ms.Table 4 shows the accuracy, recall, and F-score of each posture prediction result.Each calculation result in the table is the average of 5 tests.
The results are summarized in Table 4, which shows that (1) the precision of the seven postures are all above 96%, with the highest precision for the bending posture at 99.74% and the lowest for the squatting posture at 96.80%; (2) the recall of the seven postures are all above 96%, with the highest recall for the bending posture at 99.45% and the lowest for the half-squatting posture at 96.57%; (3) the F-score of the seven postures are all above 97%, with the highest F-score for the bending posture at 99.56% and the lowest for the half-squatting posture at 97.42%.Overall, the recognition performance is best for the deep bending posture, while the recognition performance for the half-squatting and squatting postures is relatively poor.Figure 9 shows the distribution of the test set confusion matrix from five tests.From the confusion matrix, it can be visually observed that the model performs well in classifying most postures.In comparison, the standing posture and half-squatting posture are more prone to be misclassified as a slightly bending posture.The comparison of the results with existing research is shown in Table 5.In aspects of accuracy and time cost, we compared our work with other IMU-based human posture recognition works.The number of recognition classes and the number of IMUs are also shown in the table.In terms of accuracy, our work achieved a 98% accuracy for the classification of seven postures using six trackers, placing it in a relatively high position compared to similar studies.Regarding the time cost, we took into account the posture prediction transition time (including data preprocessing time and inference time), as well as the IMU sampling rate.While our method does not match the performance of the approach described in reference [28], we were able to identify a greater number of postures with more considerable accuracy.
Discussion
This paper focuses on operation posture collection and recognition based on low-cost IMU.The proposed method is available, and the accuracy of basic posture classification recognition is satisfactory.Integrating more features and employing more complex machine learning models may result in higher recognition accuracy, but it also comes with relatively higher time costs.Based on the data in this experiment, we compared the recognition accuracy of the LSTM and BiLSTM models.We conducted five training and testing times and calculated the average accuracy of classification on the test set.The average accuracy of LSTM is 95.81%, while the average accuracy of BiLSTM is 98.24%.Compared to LSTM, BiLSTM has a greater advantage in basic posture classification for assembly tasks.However, due to the complexity of the model, BiLSTM requires longer training time.In our test, LSTM took 34.82 s and BiLSTM took 75.08 s.However, the difference in prediction time between the two models is not significant: LSTM took 15ms and BiLSTM took 31 ms.
Using wireless network transmission of data can enhance convenience, but it may lead to sudden posture distortion when the network signal is unstable, as shown in Figure 10.The occurrence of abnormal postures is related to the network signal quality.In the experimental environment, the occurrence of anomalies is rare (1-2 times/10 min) and quickly recovers to normal.However, in environments with poorer signal quality, it can be foreseen that sudden abnormal postures will affect the observation of operation movements and the accuracy of posture recognition to some extent.How to identify and ignore exceptional signals is a research question that needs to be further studied.Because abnormal postures often manifest as sudden drifts in joint positions, a possible solution is to set a threshold for joint position changes and identify abnormal postures accordingly.Alternatively, based on gathering a sufficient number of data samples, machine learning models can be employed to differentiate between normal and abnormal states.We will attempt to address this issue in future work.
Conclusions
For the demand for operation posture recognition in the Industry 4.0 era, this paper explores a low-cost method for collecting assembly actions and recognizing assembly postures based on IMU.The study includes the following aspects: A low-cost human motion collection system based on IMU has been proposed.The BMI160 inertial measurement module is combined with the ESP8266 communication module to create the motion collection tracker.Motion signals are transmitted via WiFi to the computer to obtain sensor pose information.The tracker is assigned to the corresponding joint positions of the digital human body based on the actual wearing position.Real-time calculation of human posture is performed by combining the hierarchical relationship of the human body skeleton.The Unity development platform receives human motion information and presents synchronized online visualization through a 3D digital human model; We experimentally validate the feasibility of the action collection scheme.We have simulated various assembly tasks in a virtual reality environment and collected motion information for six joints of the subjects: chest, waist, left knee, left ankle, right knee, and right ankle, which included the rotation angles around XYZ directions and the displacement in XYZ directions.The BiLSTM model was used to identify seven common assembly postures: standing posture, slightly bending posture, deep bending posture, half-squatting posture, squatting posture, sitting posture, and supine posture.The model performs well in classifying these operation postures.
Based on the experiment results, the system could serve as a low-cost solution for the basic operation posture recognition of operation tasks.Subsequent research will focus on enhancing the operation posture recognition system and testing it in real factory environments.
Figure 1 .
Figure 1.The structure of the low-cost motion capture system based on IMU.
Figure 2 .
Figure 2. The circuit diagram and physical diagram of the tracker.(a) The status of completed welding.(b) The tracker with casing and straps attached.(c) The tracker placement.
Figure 3 .
Figure 3.The digital human model.(a) Skeletal model.(b) Digital human model with skinning.
Figure 4 .
Figure 4. Real-time human body and digital human model.
Figure 6 .
Figure 6.Chest joint angle over time, each color represents a category of working posture, the gray area is the excluded periods.
Figure 7 .
Figure 7.The operation posture recognition network structure.
Figure 9 .
Figure 9.The confusion matrix from five tests.
Figure 10 .
Figure 10.The sudden posture distortion when the network signal is unstable.(a) The normal posture.(b) The abnormal posture.
Table 1 .
Comparison of prices and performance of common IMUs.
Table 3 .
The duration of each operation task.
Table 4 .
Classification accuracy of the test set.
Table 5 .
Comparison of the human posture recognition results. | 6,392.4 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Evaluation of Synthetic Jet Flow Control Technique for Modulating Turbulent Jet Noise
: The use of a synthetic jet as the flow control technique to modulate a turbulent incompressible round jet was explored and assessed by numerical simulations. The flow response was characterised in terms of turbulent statistics and acoustic response in the far-field. A quasi-Direct Numerical Simulation (qDNS) strategy was used to predict the turbulent effects. The Ffowcs-Williams and Hawkings (FWH) acoustic analogy was employed to compute the far-field acoustic response. An amplification effect of the instabilities induced by the control jet was observed for some of the parameters explored. It was observed that the control technique allows controlling the axial distribution of the production and dissipation of turbulent kinetic energy, but with respect to the acoustic aspects, the appearance of a greater number of noise sources was observed, which in the far-field, resulted in an increase from 1 to 20 dB of the equivalent noise for the different operating parameters of the control technique studied.
Introduction
Injection by turbulent jet flows has multiple uses in industry including applications in heat transfer processes, drying, cleaning, aerodynamic stabilisation, combustion chambers, and propulsion systems, among others [1][2][3][4].The noise generated by these sources has been widely studied, and noise generated in industrial environments is considered to be the third-most-damaging source of noise to the human hearing system [2].The World Health Organisation (WHO) recommends the levels of exposure for noise generated by transportation systems, including aircraft propulsion systems.In the Environmental Noise Guidance [5], it is strongly recommended to reduce exposure to aircraft noise levels to less than 45 dB L den (day-evening-night-weighted sound pressure level) during the day and 40 dB L den at night.This is because exposure to higher noise levels is linked to adverse health effects and sleep problems.
Although there is no precise description of the nature of noise generated by turbulent jets, three noise components are associated with this phenomenon.These are known as turbulent mixing noise [6], broadband shock-associated noise [7], and screech tones.Nevertheless, the turbulent mixing noise is considered to be the main source of noise in subsonic jets.As the name suggests, it is generated by the turbulent mixing effects between the jet and a still medium [8], which occurs in an annular layer with high shear stress values, which increases in size as the jet advances into the medium.The region inside this annular region, where no rotational effects are present, is known as the potential core or cone of silence, and it extends until the mixing region completely fills the jet area [9].In this way, turbulent mixing noise is composed of the noise generated by the small and large turbulent scales.The Fine-Scale Structures (FSSs) are predominant close to the nozzle; they have sizes of the order of magnitude of the thickness of the mixing layer and produce the high-frequency response in the noise spectrum.On the other hand, Large-Scale Structures (LSSs) occur further ahead of the potential jet core; they have sizes of the order of magnitude of the nozzle diameter and are the ones that provide the low-frequency effects [10].Thus, for subsonic jets, the small turbulent scales are probably the most-predominant in terms of noise generation and the largest contributors at different emission angles.However, the noise generated by the large turbulent scales, which propagates in the jet direction, can become significant at small emission angles with respect to the jet axis [11].
In order to reduce jet noise in industrial applications, flow control techniques have been studied for more than 50 years [12].These mainly seek to modify the mixing effects between the injected fluid and the still fluid [13], because noise jet arises from the turbulent effects created during fluid interaction.However, any alteration in the development or nature of the jet is reflected in the efficiency of the jet to generate thrust.A large number of control techniques seek to modify the flow patterns at the jet exit.Flow control strategies are classified into passive and active control techniques [14].Passive control techniques focus on generating modifications on the nozzle geometry, and active control techniques do so by adding mass or energy to the jet and have the advantage of being able of adjust the control technique's operation parameters during operation [15].
Chevron nozzles are amongst the most-commonly used passive control techniques.They are characterised by triangular grooves at the trailing edge of the nozzle, which introduce disturbances in the high gradient zones of the jet and enhance the mixing effects between the jet fluid and still medium.This enhances the presence of small turbulent scales in the initial zone of the jet and decreases turbulence levels in the downstream region [16,17].These devices are the most-effective in terms of noise mitigation, and according to Sadeghian and Bandpy [4], their use has a minimal impact on the operation of the turbines in which they are installed.
On the other hand, some examples of active control techniques use injectors that add mass to the main jet, while also seeking to modify the flow patterns, as mentioned before.Active control techniques have been studied since the 1950s in aeronautical applications.Powell [18] studied the influence of the velocity profile at the exit of the turbulent jet on the noise generated and proposed the use of a lower velocity annular jet surrounding the main jet.The direct effect obtained with this technique was a reduction of the high velocity changes between layers of the jet.Subsequent studies focused on the use of auxiliary jets to reduce the high-velocity gradients at the jet exit.In Kurbjun [19], water was used as the injection fluid, and the jets were injected close to the nozzle with a radial orientation towards the longitudinal axis of the main jet.Although they found great efficiency in terms of noise reduction using water as the auxiliary injection fluid, they also pointed out the big practical challenge of auxiliary water jets, which would require large quantities of water to be injected into the engine jets during flight.In order to avoid the use of a different fluid, the Michael patent [20] proposes some modifications in the nozzle to make the auxiliary jets fed with the same fluid jet.These are evenly spaced on the periphery of the nozzle and can change their orientation from being fully parallel to the longitudinal axis of the jet to being slightly inclined towards the centre of the jet.These kinds of studies led to the use of active control techniques that feed from the jet fluid they are trying to control, facilitating their assembly, and also to the use of water as an injection fluid in launch applications, where noise reduction is required, especially at take-off.Recently, studies using these techniques have focused on studying the configurations, locations, and operating parameters of the auxiliary jets, such as Callender et al. [15] and Caeti and Kalkhoran [21], using fluidic injection, Rajput and Kumar [13], using downstream fluid injection, and Prasad and Morris [22] with fluid inserts.
An active control technique used in different applications is based on synthetic jets located outside of the flow boundaries and pointing across the mean flow.A synthetic jet is characterised for transfer momentum to the flow without net mass injection.It is produced by the periodic injection and subtraction of mass from an orifice induced by an oscillating diaphragm.Usually, the diaphragm's resonance frequency is driven by a piezoceramic element [23].However, there is also a special type of synthetic jet actuator driven thermodynamically by pulsed arc discharges, known as the plasma synthetic jet actuator [24].Compared with the piezoelectric ones, plasma actuators are capable of producing high-velocity (>300 m/s) and high-frequency synthetic jets (>5 kHz) [25].Among the different applications of these actuators are aerodynamic control [26], heat transfer [23], vectoring [27], and jet noise control [28,29].Although related research has demonstrated the ability of this technique to perturb flow behaviour and control it depending on the parameters used, it is not yet clear which parameter plays an essential role in controlling flows in the most-efficient way.General parameters include the addition of momentum, the location of the actuator, and the frequency of excitation.Additionally, among the research focused on noise control using synthetic jets, it has not been possible to identify how the instabilities induced by the synthetic jets can influence the main jet flow and, thus, modify the sources of noise in the flow.
In this paper, we studied the modulation effects of the noise produced by a low-Mach-number and low-Reynolds-number subsonic turbulent jet, using as an active control technique a synthetic jet injected over an annular region located before the nozzle exit in different operating conditions, to understand the effect of the transport and propagation of ordered and synthetic instabilities by the round turbulent jet.In order to study the effect of the synthetic jet as a flow control technique, a number of flow features and properties were examined in detail, including turbulence statistics, the source term of the acoustic analogy of Lighthill [30], and the far-field noise using the acoustic analogy of Ffowcs Williams and Hawkings [31], also known as the Ffowcs-Williams and Hawkings (FWH) analogy.
The remainder of this paper is organised as follows.Section 2 describes the governing equations used as the mathematical model, as well as the details of the computational model used for the validation process and the main numerical experiments.Section 3 presents the results obtained by the model for both the validation case and the cases using the annular jet control technique.This section also presents the main conclusions on the flow field and the acoustic response at the sources and in the far-field.Finally, Section 4 presents the conclusions of the present study.
Materials and Methods
A three-dimensional computational model of a subsonic jet was produced based on the model presented by Brès et al. [32].In particular, the convergent nozzle geometry and domain dimensions used in the present study match those used in Brès et al. [32].The nozzle has a length of 10-times the diameter at the nozzle exit (10D j ) and a size ratio of the inlet diameter to the outlet diameter of 10.At the inlet of the nozzle, the volumetric flow necessary to achieve the desired operating parameters at the outlet of the nozzle was configured.For the outer boundaries of the domain, the conditions of a fixed pressure of zero and a zero velocity gradient for the outflow and also for stability, and if recirculation occurs near the outlet, the normal wall velocity is applied.Additionally, a gradual increase of the size cells was applied in the regions away from the turbulent jet development region and in the direction of the domain exits, in order to increase the diffusive effects of the numerical methods used and to reduce any reflection effects that may occur.The nozzle tip was modelled as immersed within a domain that contains a still medium, as shown in Figure 1.The configuration employed allowed setting up a surface enclosing the acoustic sources, which, in turn, helped to reduce interference with the computational boundaries.Additionally, to avoid flow recirculation effects and to facilitate the entry of the jet into the domain, an exterior inlet flow was configured with a velocity 100-times lower than the jet velocity.
The operating parameters of our test case were defined from one of the cases studied by Panchapakesan and Lumley [33], in which air was used as the injection fluid.These parameters are presented in Table 1.≈11 × 10 3 -
Aerodynamic Computation
A quasi-Direct Numerical Simulation (qDNS) strategy was selected in this study to model the turbulent effects in the flow.This strategy is based on the idea of solving only the large turbulent scales using a reasonable fine mesh, as in classical Large Eddy Simulations (LES), but without explicitly modelling the smallest scales of the turbulence [34].This was selected in order to avoid the high computational costs associated with simulating all turbulent scales by Direct Numerical Simulation (DNS) approximations, which requires the use of high-order numerical methods.Thus, the use of qDNS strategies reduces the space-temporal order of resolution of the phenomenon, but captures the trends of turbulent effects in the flows of interest.Although this simulation strategy under-predicts the intensity of the turbulent effects, it is able to capture the trend of the phenomenon with a more manageable associated computational cost [35][36][37].Furthermore, another relatively well-known advantage of this strategy is that, unlike DNS approaches, where modelling flows in complex domains and situations is only feasible for moderate Reynolds numbers and simple flow geometries, the qDNS strategy allows modelling cases with complex geometries, with comparatively low computational costs, provided that a reasonable and efficient spatial discretisation is employed.
The numerical model selected for the present work was based on the Finite-Volume Method (FVM), using the weighted Crank-Nicholson method, employing a weight selected to ensure unconditional stability, and the gradient and divergence terms were discretised using the second-order Gaussian linear scheme, while the Laplacian terms were discretised using the Gaussian linear corrected scheme, which also aims to provideaccuracy for both interpolation and the surface normal gradient components required for the Laplacian terms.Finally, linear schemes were used for point-to-point interpolation.This model was solved using the open-source software OpenFOAM.To approximate the continuity and momentum equations for an incompressible flow in a transient state, the solver called pimpleFoam was selected.This is based on the PIMPLE algorithm, which is a combination of the Semi-Implicit Method for Pressure Linked Equations (SIMPLE) and Pressure Implicit with Splitting of Operators (PISO) algorithms for pressure and velocity coupling and can be understood as the use of the SIMPLE algorithm to find a steady-state solution at each time step [38,39].Convergence values for velocity and pressure of 5 × 10 −6 and 5 external correctors of the algorithm were used for the solver.
Aeroacoustic Computation
The noise at the sources and far-field was calculated using acoustic analogies intended to evaluate the acoustic response of the study cases.The acoustic sources were calculated using the right-hand side of the acoustic analogy of Lighthill [30], which is presented in Equation ( 1), an expression that is obtained from the manipulation of the conservation of mass and momentum equations.
Here, ρ and c ∞ denote the density and the velocity of sound in the fluid, p the acoustic pressure, σ ij the viscous stress tensor, and δ ij the Dirac delta function.If the flow is considered incompressible, the velocity fluctuations are dominated by turbulence and the viscous terms, represented by σ, can be neglected.Additionally, for isentropic flows, it is valid to assume that p = c 2 ∞ ρ .Accordingly, it is possible to approximate the tensor T ij to the term ρu i u j in Equation ( 2).After some simplifications and taking into account the conservation of mass for incompressible flows, the source term can be finally expressed as On the other hand, the FWH equation was used to calculate the far-field acoustic response.This analogy model was implemented in OpenFOAM in the library libAcoustics, a computational tool developed by Epikhin et al. [40] that works in parallel with the fluid flow solvers.In this way, after a given number of iterations of the flow solver used, the sound pressure levels at the measurement points are calculated taking into account the time delay between the moment of noise generation from the turbulent flow and the moment it is perceived at the far-field.
The FWH equation was developed by Ffowcs Williams and Hawkings [31], and it has a generalised form in order to be able to describe the phenomenon of acoustic generation by turbulent flow in a domain with discontinuities, for instance when moving solid surfaces are immersed in a turbulent flow.The FWH equation is presented in Equation ( 4), and similar to the Lighthill acoustic analogy, it has an non-homogeneous wave equation form.The first term on the right-hand side represents the sources in the form of quadrupoles in the flux region.The second and third terms are typically known as the Loading Source Term (LST) and Thickness Source Term (TST).These terms represent the sources in the form of dipoles and monopoles, respectively.Furthermore, they are defined over the surface f (x, t) = 0 by a Dirac delta function, δ( f ).The terms L i and Q n are defined in Equations ( 5) and ( 6), respectively.
In these expressions, v i is the surface velocity, u i is the fluid velocity, H s is the step function that defines the surfaces that enclose the noise sources in the flow field, and the terms L i and Q n are known as the loading source term and the thickness source term and represent the dipoles and monopoles sources, respectively.Equation ( 4) is the general form of the FWH equation and is also known as the formulation with permeable surfaces or control surface, where the acoustic sources are enclosed by the surface f (x, t) = 0, which should not represent an obstacle in the flow.If f (x, t) = 0 corresponds to a solid surface, then u i = v i , and the formulation of the FWH equation for moving impenetrable surfaces is obtained.If this surface is stationary, u i = v j = 0, and the FWH equation reduces to the equation of Curle [41].
In this approach, the calculation of the acoustic response of sources located at Y on points located at the far-field X is achieved by using the so-called integral formulation of the FWH analogy, shown in Equation (7).In this expression, the first term is evaluated within the volume where the turbulent flow develops, and the second and third terms are evaluated on the surface that separates the regions of the domain.
Computational Model
The computational domain was discretised using a structured mesh adapted to the shape of the nozzle aiming to obtain smaller cells in the regions close to the jet exit and the nozzle walls.This mesh was developed using the software Gmsh [42] and then imported into OpenFOAM.
The mesh has approximately 3.2 million cells.In Figure 2, the cell sizes are presented over the line in the axial direction from the nozzle edge and over the radial line at the nozzle exit.To appreciate the quality of the prescribed computational mesh, our mesh density was compared with the meshes developed by Bogey et al. [43] and Brès et al. [32], which investigated turbulent jets using LES turbulence models with a Ma number of 0.9 and Re numbers of 1 × 10 5 and 1 × 10 6 , respectively.In the region closer to the nozzle, a mesh resolution similar to the one presented by the mentioned studies was achieved; however, it is important to note that the Re number used in this work is one order of magnitude smaller, and no turbulence model was used.In this way, it was inferred that the mesh employed was able to capture the turbulent effects in these regions in the same way as was performed in the mentioned investigations for higher turbulence levels.In addition, the necessary resolution was achieved for the region where the influence of the perturbations added by the synthetic jet was observed.Figure 3 shows a slice of the mesh at the initial part of the jet.Additionally, in order to use the FWH analogy, the surface containing the turbulent flow was constructed with a cylindrical shape with a variable diameter, which was oriented in the direction of the jet and was located at the jet exit.The dimensions of the cylinder in terms of the nozzle diameter (D j ) were a length of 35D j , an initial diameter of 4D j , and a final diameter of 7D j .These characteristics were selected taking into account the information presented in Brès et al. [32] and the recommendations discussed by Mendez et al. [44].The noise generated by the jet was computed on an array of points representing far-field locations according to Figure 4. Three arcs of 9 points each were configured, with distances of 15D j , 30D j , and 45D j from the jet exit and spanning angles from 20 • to 100 • .
Flow Control Technique
The proposed control technique consists of a synthetic jet injected over an annular region of the nozzle, located at a γ distance from the nozzle outlet.The location of the synthetic jet injection ring over the jet nozzle is shown in Figure 5.Both round and fluctuating jets can be described using the Strouhal number (St), a dimensionless number used to analyse oscillation phenomena in flows and representing the ratio between the inertial forces produced by the fluid instability and those produced by changes in velocity.In the present study, different operating conditions of the synthetic jet were explored, but only one operating condition of the main round jet.Thus, in order to simplify the analysis, a relationship between these dimensionless parameters was established, and all the analysis and presentation of the results were carried out using this relationship between the Strouhal numbers.
Equation (8) presents the Strouhal number for the round jet (St j ), determined using the nozzle diameter (D j ), the jet injection velocity (U j ), and a frequency ( f j ), which was determined using the thickness of the boundary layer inside the nozzle (δ) and the maximum value of the turbulent intensity (u rms ) on the centerline of the jet.
On the other hand, Equation ( 9) presents the Strouhal number used for the synthetic jet (St sj ) obtained using the oscillation frequency of the synthetic jet, the thickness of the injection ring (αD j ), and the oscillation amplitude of the synthetic jet (βU j ).Factor α represents the ratio between the injection ring thickness and the nozzle diameter, whereas β is the ratio between the oscillation amplitude of the synthetic jet and the injection velocity of the round jet.Therefore, the Strouhal number for the synthetic jet can be expressed as Using the previous definitions for the Strouhal number for the round jet and the synthetic control jet, the ratio of St can be expressed as a frequencies ratio, as presented in Equation (10).
In the present work, constant values were used for the parameters α, β, and γ, specifically α = 0.1, β = 0.1, and γ = D j , with D j representing the round jet diameter at the exit of the nozzle.The numerical experiments were established by varying the oscillation frequency of the synthetic jet.In this way, the R St value was reduced to the ratio between the frequencies of the synthetic jet and the round jet.Table 2 presents the R St values and their corresponding f sj frequencies employed in the numerical experiments.
Results and Discussion
The results presented below were obtained from the instantaneous, spatially averaged, or time-averaged values of the cases studied once they reached a quasi-steady-state, where a periodic behaviour was present.For this purpose, the simulations were allowed to run for more than 30 jet times, which in terms of the period of the oscillation in the lowest frequency-controlled case means 25 oscillations of the synthetic jet.
Validation
In order to determine the validity of the results achieved with our computational model, the behaviour of the turbulent jet was compared with experimental investigations and numerical results obtained by previous works using LES and DNS.The typical profiles included the mean velocity, the average velocity fluctuations, and the production and dissipation terms of the turbulent kinetic energy transport equation.These profiles were obtained over the axial lines at r/D = 0 and r/D = 0.5 and over radial lines located at different axial positions.The radial profiles were normalised using the velocity at r/D = 0 of each axial location and the value of the radial distance at which the velocity decays to half of its value (r 1/2 ).The normalisation can be performed based on the self-similarity characteristic of turbulent jets in the developed region of the jet.
The profiles of the axial component of the time-averaged velocity over the axial jet direction are presented in Figure 6.It can be observed for the centerline that the case captured the trend of the reference profiles; it presented a potential core length similar to those reported by Brès et al. [32] and Shin et al. [45] and presented a similar decay behaviour at x/D j ≈ 2, as presented by the cases of Todde et al. [46].On the other hand, the profile on the lip line (Figure 6b) showed a similar behaviour to the jet with Re = 1 × 10 6 from x/D j ≈ 5.The behaviour of the axial component over the radial direction of the jet is presented in Figure 7 and was compared with the analytical function proposed by Sautet and Stepowski [47], which is presented in Equation (11).It can be seen that the result obtained presented a profile that, after r/r 1/2 = 1, had an increase of approximately 0.05, which was a consequence of the low velocity external flow, which had a considerable value when it was compared to the profiles in the last axial positions.The turbulent intensity over the lines in the axial direction located at r/D j = 0 and r/D j = 0.5 are presented in Figure 8 and compared with the cases of Re = 1 × 10 6 and Ma = 0.9 of Brès et al. [32], Re = 1 × 10 4 and Ma = 0.8 of Bonelli et al. [48], Re = 7.3 × 10 3 and Ma = 0.3 of Shin et al. [45], and Re = 6.7 × 10 3 and Ma = 0.01 of Todde et al. [46].As can be seen, even in the cases reported in the literature, there were considerable differences in the shape of the profiles and the magnitudes reached, especially for values of x/D j smaller than 10.For the profile on the central line (Figure 8a), a peak at x/D j ≈ 5 is observed, which coincides in position with the cases explored by Todde et al. [46] and presents closer values to those reported by the case of Bonelli et al. [48].Similarly, for the line that goes from the edge of the nozzle (Figure 8b), there is a similarity with the case of Bonelli et al. [48].With respect to the velocity fluctuations, the components of the Reynolds stress tensor over the radial direction of the jet were calculated, and the self-similarity profiles were constructed.These were compared with the data reported by Hussein et al. [49] and Panchapakesan and Lumley [33].The profiles of the four components of this symmetric tensor are presented in Figure 9.As can be seen, there are similarities in the shape of the profiles, but they under-predict the reported values.This was a consequence of the way in which the turbulent effects were modelled in this work.Although a sufficiently fine mesh was developed in the critical zones of the jet to capture the large-scale turbulent effects, the failure to capture all turbulent scales with the mesh and the use of low-order schemes led to the attenuation of the turbulent effects and, thus, to a decrease in the values reporting the turbulence intensity.
An additional measure of the prediction capabilities of the implemented model was performed in terms of the profiles of the production and dissipation terms of the Turbulent Kinetic Energy (TKE) transport equation.These profiles are presented in Figure 10a and were normalised using the value U 3 x,0 /r 1/2 .As can be seen, the TKE production term was under-predicted, although it exhibits a similar shape to the profile reported by Panchapakesan and Lumley [33].Interestingly, the dissipation term presented values approximately one order of magnitude lower than those reported experimentally.Noteworthy also, regarding the profile shape, there seemed to be a loss of the trend near the jet centre and at r/r 1/2 ≈ 1.2.Subsequently, the FWH acoustic analogy was used to compute the far-field noise for the round jet studied (Ma ≈ 0.1) and for another case with Ma = 0.3.The Ma = 0.3 case was used to validate the model with the results presented in related studies, which generally explored jets with Ma ≥ 0.3.Additionally, the self-similarity profiles of the acoustic spectra described by Viswanathan [50] fit the numbers of Ma ≥ 0.3 Far-field noise spectra measured at 30 • , 60 • , and 90 • , with respect to the jet injection direction and located 30D j from the jet exit are presented in Figure 11 and compared with self-similarity acoustic spectra and data reported using Ma = 0.3 from Jordan et al. [51].As can be seen, the spectra obtained at the three measurement points show very similar behaviour.This indicated that, in the prediction phase of the turbulent effects, not all the existing scales were captured.Consequently, only the large turbulent scales generated sources of noise that tended to propagate uniformly in the far-field.However, for the case of Ma = 0.3 (see Figure 11) at the range between St = 0.4 and St = 1.2, the obtained spectra presented a similar slope and magnitudes close to the comparison spectra.
From the validation analysis, it is possible to see that the model under-predicted the magnitude of the fluctuations caused by the turbulence phenomena, but it provided results that allow understanding the flow characteristics and energy transfer at the main flow scales.In addition, the model predicted the trend of the acoustic spectra in the far-field, and it showed sound pressure levels consistent with those presented in related studies.In this way, we proceeded to use the developed model as a comparative tool by applying the control technique with different operating parameters.
Flow Field
The instantaneous flow fields allow appreciating the nature of the jet when applying the control technique, as we can identify the mixing layer that is generated between the jet and the stationary fluid, and how such a layer is modified depending on the operating parameters of the control technique.In the Supplementary Materials section there are 3 videos that allow to visualize the instantaneous behavior when the control technique is applied.The magnitude of the velocity gradient is presented in Figure 12.As can be seen, the mixing layer of the non-controlled round jet case is stable up to x/D j ≈ 2. After this point, the first characteristic instabilities of the jet start to appear, inducing the onset of vortices, which increase in size until they dissipate in small turbulent scales in the attenuation region of the jet.
When applying the control technique using the lowest value of the R St parameter, it was observed that the mixing layer was no longer stable, and instabilities appeared from x/D j = 0. Thus, the instabilities that initiated the turbulence processes were not generated by the interaction of the injection fluid with the stationary fluid, but by the synthetic perturbations introduced by the control technique.However, as the parameter R St was increased, the initial mixing layer tended to be restored, and for R St = 2.0, a more stable layer similar to that of the non-controlled round case was reached.For the case of R St = 1.5, it was observed that the injected pulses tended to organise themselves into a thicker mixing layer with internal perturbations, but this was maintained up to x/D j = 2. Thus, for low values of R St , the jet behaviour was significantly perturbed, and for higher values of R St , the control technique acted as a bulge in the jet nozzle, which did not have a considerable effect on the development of the jet, at least from the point of view of the instantaneous fields.In order to visualise the coherent structures generated in the regions where vorticity effects are appreciable, the instantaneous fields of the Q-criterion were selected, and the iso-surfaces were plotted for the regions where the Q-criterion is higher than 1 × 10 8 ; see Figure 13.The Q-criterion is one of the multiple methods used to identify vortices in the flow field.This criterion relates the rotation rate with the strain rate and is presented as a scalar field, where positive values represent regions where vorticity dominates and negative values where the strain rate is dominant.
As can be seen, these coherent structures appeared from x/D j ≈ 2 for the noncontrolled round case (Figure 13a) and initially had the shape of a ring, which increased in diameter as the jet developed.This particular structure was maintained until x/D j ≈ 3, where smaller structures started to be created, which tended to reduce in size, and eventually disappeared at x/D j ≈ 8.5.On the other hand, for the four cases where the control technique was used, the structures appeared in the injection region of the synthetic jet and, depending on the parameter R St , attenuated immediately or developed similarly to the non-controlled case.For the case of R St = 0.5, the appearance of the smaller structures occurred before the jet left the nozzle.During the development of the jet, smaller structures were observed to appear and attenuated mostly before x/D j ≈ 7.In a similar way as occurred with R St = 1.0,only in this case, a structure was formed outside the nozzle, which was maintained until x/D j = 0.5, but from that point onwards, a behaviour similar to R St = 0.5 was observed.For the case R St = 1.5, we observed the appearance of small structures in the region from x/D j ≈ 0 to x/D j ≈ 2, which coincided with the internal perturbations of the mixing layer seen in Figure 12d.Thereafter, we observed a behaviour similar to the round case.Finally, something similar occurred for the case with R St = 2.0, but in this case, there were no structures in the initial region of the jet, which in terms of the appearance of coherent structures, was very similar to the non-controlled case.
To visualise the effect of the use of the control technique on the intensity of the turbulent effects, the fields of the four Reynolds stress tensor terms were generated.This was performed by averaging the Reynolds stress tensor in cylindrical coordinates over 128 planes located uniformly in the angular direction.The fields of the u u , v v , w w , and u v terms are presented in Figures 14-17, respectively.
As can be seen, the control technique produced fluctuations that appeared and subsequently attenuated at positions closer to the nozzle exit, especially for the cases of R St = 0.5 and 1.0.It is important to note that the control technique injected fluctuations into the flow, which depending on the R St parameter, made them behave as a jet at a more advanced stage of development, so that the interaction with the stationary surrounding fluid was more intense and at shorter distances.It can be seen for the u u term plot (see Figure 14) how the fluctuations at the jet edge appeared at the nozzle exit, but in the centre of the jet, they were delayed to reach a value of 0.2 at x/D j ≈ 6 for R St = 0.5 and at x/D j ≈ 7 for R St = 1.0.This suggested that the synthetic fluctuations at the jet edge caused most of the jet energy to feed the high-intensity fluctuations at the edge and made them take longer to reach the jet centre.On the other hand, for the cases of R St = 1.5 and 2.0, the behaviour was similar to the round one, although a displacement of the region of fluctuations towards the jet exit was also observed.
In the v v term (see Figure 15), a significant attenuation of the intensity was observed, although the recovery effect of the round jet was still present for the highest value of R St .For the case of R St = 1.0, the fluctuations in the radial direction were attenuated to the point of only perceiving the region with a value less than 0.4, suggesting that, for the oscillation frequency of this case, the instabilities tended to be organised in such a way that they were drastically reduced in the radial direction and compensated in other terms, such as w w and u v .This can be seen in Figure 16, where for the same case and for the case of R St = 0.5, the region with higher intensity spans a larger region and, additionally, reaches higher values.This did not occur for the case of R St = 1.5, where the attenuation of the magnitude was generated to the point of reducing the larger region of 0.8 to three small regions spread between x/D j ≈ 5 and x/D j ≈ 9. Finally, for the shear stress term u v , it can be observed how the above-mentioned appreciations for the normal stresses u u and v v were somehow reflected in this field and indicated the consistency of the post-processing.The two terms considered in the turbulent kinetic energy transport equation were obtained using the same procedure used for the Reynolds stress tensor components.The production fields (P) obtained for the different cases explored are presented in Figure 18.The production in the round jet showed a shape of an annular region that spanned the region in the axial direction from x/D j ≈ 1.8 to 10.5 and in the radial direction from r/D j ≈ 0.25 to 0.7.Using the control technique, the region enclosed by the 0.1 contour maintained its shape and size, and for the cases with low R St , it shifted towards the jet exit.Similarly, the region enclosed by the value of 0.8 had a larger size for the cases with a low value of R St , and for those with a larger R St , a similar location to the round case was observed with a similar behaviour.
The dissipation term ( ) is presented in Figure 19, and this term presented higher values for the cases with a low value of R St .For these cases, it can be observed how the contour enclosed by 0.9 covers a region of approximately x/D j = 3, unlike the round case and those with high values of R St , where this region has a smaller size and is located later in the development of the jet.This suggested that, for low values of R St , the transport processes of the turbulent kinetic energy towards the smaller scales and eventually towards internal energy occurred much faster and with greater intensity, generating after this zone a region with low dissipation values, which is maintained for a longer time.
Acoustic Response
The acoustic sources originating in the turbulent jet were calculated using the Lighthill equation (see Equation ( 3)).Their instantaneous value after the jets reached a fully developed quasi-steady-state, as presented in Figure 20.
For the different cases, there were two regions that differed in the way the acoustic sources were generated.In the initial part of the jets, there were source packages that fluctuated between positive and negative values, which coincided with the location of the mixing layer of the jets (see Figure 12).This was followed by the second zone, which was located in the region where the jet was fully developed.It was observed that the second zone approached the region closer to the nozzle in the cases with a low value of R St (0.5 and 1.0), and for the other two cases, it presented a behaviour similar to the round case.
Related to the acoustic source packages that were located on the initial mixing layer of the jet, it was observed that, for the round case, they appeared at x/D j ≈ 1, and when the control technique was used, acoustic sources appeared even before the jet exited the nozzle.For the cases with R St equal to 1.5 and 2.0, these acoustic sources appeared as organised packages during the initial region of the jet.For the case of R St = 2.0, three zones with different behaviour within the initial region of acoustic sources can be observed.First, there were organised packets with a thickness of approximately 0.08D j each, which changed sign and appeared up to x/D j ≈ 0.7.Then, there was a zone of silence up to x/D j ≈ 1.1 and, finally, another zone with organised packages with a similar appearance as in the round jet case.This suggested that, for this particular case, there was a region of acoustic sources located near the nozzle exit, which was generated by the control technique and then another region of acoustic sources generated by the turbulent effects common for this type of flow.The noise spectra obtained for some of the measurement points are presented in Figure 21.As can be seen, the use of the control technique generates a high-frequency pure tone and generally increases the sound pressure level captured at the different measurement points.It is also observed that the pure tone increases in frequency for higher values of R St .It can be observed that the noise captured by our model decreases with distance, although for the peaks of the pure tones, there is no clear decay effect.This is a typical behaviour of pure tones when propagated from different sources, because, depending on the separation of the sources, there may be points where the tone is attenuated or amplified, depending on the phase difference between the different sources.
Additionally, we observed the appearance of peaks with frequencies of multiples of the pure tones and lower amplitude, which are known as harmonics.For the cases of R St = 2.0 and R St = 1.5, up to 2 harmonics were captured, and for R St = 1.03 and R St = 0.5, up to 10 harmonics were captured.
Taking into account that the additional noise added to the acoustic spectra by the control technique must be related to the oscillation frequency of the synthetic jet, the spectra were normalised with the oscillation frequency and are presented in Figure 22.It can be seen how the use of this control technique generated pure tones with a frequency of 0.5, the oscillation frequency of the synthetic jet.This suggested that injecting organised fluctuations into the jet promoted the appearance of acoustic sources that oscillated with the frequency of the control technique and with a greater amplitude than the rest of the sources that were generated by the turbulence phenomena typical of this type of flow.As can be seen, the round jet case presented a smooth profile with higher values at lower angles, as presented in the literature.On the other hand, the cases using the control technique presented an unusual behaviour.No attenuation of the signal was observed as the measurement distance increased; no smooth profiles increasing in magnitude at small angles were observed; below trend peaks appeared at different angles for different measurement distances.
Conclusions
In the present work, a computational model was built and validated with the aim of modelling the use of a synthetic annular jet as a flow control technique for an incompressible turbulent round jet.The results showed that the model employed had the ability to capture the tendency of the turbulent phenomenon, especially in the vicinity of the exit of the jets.The fluid was injected and subtracted alternately uniformly over the entire annular region, and taking into account that the fluid for the operating conditions was considered incompressible, a piston effect was generated in the region upstream of the nozzle outlet.This affected the portion of fluid at the end of the nozzle, which increased its velocity when it was in the injection phase of the synthetic jet, and on the contrary, it decreased its velocity when it was in the subtraction phase.In this way, the pulses injected inside the nozzle were immediately reflected throughout the jet exit area.Although it was found that this effect was most noticeable at the edge where the jet mixing layer was modified by the stationary fluid, at the jet centreline, these fluctuations were also noticeable (see Figure 14a), but those were quickly attenuated within the potential core of the jets.
The use of the control technique inside the nozzle caused the mixing layer to stop behaving in a stable way, and even a layer detachment seemingly appeared in the last section of the nozzle.This was noticeable for cases with low values of R St , where the oscillation period was longer and, therefore, the phase of positive and negative displacements injected and subtracted more fluid.For the cases with higher values of R St , it was observed that the control technique generated an alternating effect between throttling and flow relief, which was not sufficient to generate a significant modification effect of the initial mixing layer, at least from the point of view of the flow behaviour.
It was also observed that, depending on the value of R St , the location of the regions with high turbulence intensity values were modified.Thus, for the cases with low values of R St , the jets behaved similar to the ones presented in investigations related to higher Re numbers, which were characterised by instabilities in the initial annular region from the beginning of the jet injection.On the other hand, for the cases with higher values of R St , the behaviour was similar to the round case developed.
With respect to the acoustic response of the jets studied by applying the control technique, it was found that they all produced a higher sound pressure level in the farfield.This was true for the region of the spectrum where St > 0.1.The round jet was characterised by acoustic sources with considerable magnitude from the region where the mixing layer started to present instabilities.Using the control technique, the appearance of acoustic source packets was promoted.These acoustic source packets exhibited a size related to the wavelength of the oscillation frequency of the synthetic jet.
It was observed that the turbulent jet generated an amplification effect of the instabilities added before the jet exit.This amplification was reflected in the noise produced in the far-field.This coincides with the experiments mentioned in the discussion of Lighthill [30], where the high sensitivity of turbulent jets to sound waves from sources external to the flow was mentioned.
Additionally, it was identified that the noise added by the use of the control technique was presented as a pure tone with a frequency of 0.5-times the oscillation frequency of the synthetic jet.Although these pure tones tended to attenuate with increasing measurement distance, they also exhibited atypical behaviour in the directivity profiles.This was associated with the fact that acoustic sources with a given frequency propagated in the far-field, where the interaction with the signal coming from another source with the same frequency may present attenuation or amplification effects.It was observed that these attenuation effects (Figure 23) were associated with the appearance of at least one peak, which changed in steepness as the measurement distance increased for the different cases studied.
Finally, it is possible to conclude that synthetic annular jets can modulate the turbulence levels along round jets, which can be used in flow control strategies.In terms of noise emission, the control technique increased the noise levels generated by the jet, although this increase was smaller for lower values of R St .Further work is needed to explore the effect of other operating variables and geometrical constants, as well as the validity of our conclusions at higher Reynolds numbers and Mach numbers.
Figure 1 .
Figure 1.Schematic view of the computational domain used and the model conditions.
Figure 3 .
Figure 3. Detail of computational mesh at the initial region of the jet in terms of nozzle diameters.
Figure 4 .
Figure 4. Location of measurement points in the far-field.
Figure 5 .
Figure 5. Schematic of the synthetic jet configuration as a turbulent jet control technique.
Figure 8 .
Figure 8. Normalised turbulence intensities over centerline (a) and lip line (b).Description in the legend of Figure 6; green lines from Bonelli et al. [48].
Figure 11 .
Figure11.Noise spectra on three measurement points located at 30D j for subsonic turbulent jets at Ma ≈ 0.1 (a) and Ma = 0.3 (b) (• experimental data from Jordan et al.[51]; dashed lines are the self-similar functions developed by Tam et al.[10]).
Figure 22 .
Figure 22.Spectra of sound pressure levels in dB versus frequency normalised to the oscillation frequency of the synthetic jet in each case study at 30 • 15D j (a), 30 • 45D j (b), 90 • 15D j (c), and 90 • 45D j (d).
Figure 23 CanonicoFigure 23 .
Figure23shows the directivity of the noise captured in the far-field.The figure was constructed by obtaining the equivalent noise for each of the 27 measurement points,
Table 2 .
Oscillation frequencies of the synthetic jet. | 11,288.2 | 2023-03-27T00:00:00.000 | [
"Engineering",
"Physics"
] |
Cytogenetic divergence between two sympatric species of Characidium ( Teleostei , Characiformes , Crenuchidae ) from the Machado River , Minas Gerais , Brazil
Cytogenetic studies were performed on two sympatric species of Characidium, C. gomesi and C. cf. zebra, from the Grande River basin, Minas Gerais State, Brazil. Although both species had a chromosome number of 50 with a karyotype exclusively consisting of metaand submetacentric chromosomes, interspecific diversity was detected concerning the size of the two first chromosome pairs of the karyotypes. Active nucleolus organizer regions (NORs) were located at the terminal position on the long arm of the 17 pair of C. gomesi and at subterminal position on the long arm of the 23 pair of C. cf. zebra. For both species the fluorochrome CMA3 stained only the NOR-bearing pair of chromosomes. The heterochromatin pattern also showed some differentiation between these species restricted to the centromeric or pericentromeric region of C. cf. zebra and practically absent in C. gomesi. These data are discussed concerning chromosome diversification in this fish group.
Introduction
The genus Characidium (family Crenuchidae) is made up of small fish which rarely exceed a standard length of 10 cm and which usually occur in small headwater streams.However, little is known about the chromosomes of the fish in the Crenuchidae despite the fact that this family includes about 80 nominal species (Buckup, 1993a), with the genus Characidium alone including 59 nominal taxa which makes it the most diverse genus in the Crenuchidae (Buckup, 1993a, b).
Since there is no cytogenetic data on Characidium from the Minas Gerais region of Brazil, the main objective of the present study was to describe the karyotype structure of two sympatric Characidium species from the south of Minas Gerais State.The data obtained are discussed concerning some aspects related to the chromosomal evolution of this genus.
Material and Methods
A cytogenetic survey was performed on two sympatric Characidium species: 14 female and 6 male (n = 20) Characidium gomesi and 10 female and 3 male (n = 13) Characidium cf.zebra collected in the Machado River at 22°04.471'S; 46°02.810'W near the town of São João da Mata in the Brazilian State of Minas Gerais.Voucher specimens are deposited at the fish collection of the Brazilian National Museum (Museu Nacional Rio de Janeiro, MNRJ) under catalog numbers MNRJ 28408 for C. cf.zebra and MNRJ 28411 for C. gomesi).
Mitotic cells were obtained from gill and kidney tissues by the technique described by Foresti et al. (1993).Chromosome morphology was determined on the basis of arm ratios as proposed by Levan et al. (1964) and the chromosomes were classified as metacentric (M) and submetacentric (SM) and were paired in decreasing order of size.C-banding was performed by the method of Sumner (1972), and silver-staining of the nucleolus organizer regions (Ag-NORs) was performed by the technique of Howell and Black (1980).Chromomycin A 3 (CMA 3 ) staining was performed by the method of Schweizer (1980).
Results
Giemsa staining showed that the specimens of both Characidium species investigated presented the same basic karyotype of 2n = 50 (32M + 18SM) with the fundamental number (i.e. the number of chromosomal arms) being equal to 100 (Figures 1a and 1b).However, the first metacentric chromosome pair of C. cf.zebra was considerably larger than the second pair while in the C. gomesi specimens the first and the second metacentric chromosome pair were similar in size.In C. cf.zebra we observed a secondary constriction at the subterminal position on the long arm of the 23 th chromosome pair (Figure 1b).No chromosome differences were observed between males and females of both species.
The Ag-NOR analysis of both Characidium species showed the presence of only one chromosome pair with active NORs.Terminal NORs were observed on the long arm of a pair of major submetacentric chromosomes (pair 17) in C. gomesi (Figure 2a) and on the subterminal position on the long arm of the small-sized submetacentric (pair 23), coinciding with the region of secondary constriction in C. cf.zebra (Figure 2b).In both species, chromomycin A 3 showed fluorescence only at the NOR sites (Figures 2c and 2d).
The C-banding pattern showed heterochromatic blocks at the centromeric or pericentromeric regions of all Characidium cf.zebra chromosomes and in the subterminal position of the 23 th chromosome pair (Figure 3a) while C. gomesi showed small heterochromatin blocks distributed at the centromeres and telomeres of a few chromosomes, including the telomeric region of the long arm of the 17 th chromosome pair (Figure 3b).
Discussion
The Characicium cf.zebra and C. gomesi specimens from the Machado River presented the same diploid chromosome number of 2n = 50, distributed as 32 metacentric and 18 submetacentric chromosomes.This karyotype macrostructure is the same as that observed in the majority of the other Characidium species or populations analyzed, with the exception of C. pterostictum from the Carlos Botelho Ecological Station in the Brazilian state of São Paulo (Miyazawa and Galetti-Jr., 1994) and C. lauroi from the Grande stream, also in São Paulo state, (Centofante et al., 2003) which presented one subtelocentric chromosome pair and few differences in the number of meta-submetachromosomes.
The C. cf.zebra and C. gomesi specimens studied by us could be easily separated by the fact that the C. cf.zebra first metacentric chromosome pair is larger than the second pair while the C. gomesi first metacentric pairs were homogeneous in size.Except for Characidium.sp.cf.Characidium alipioi and C. lauroi (Centofante et al., 2003) and the C. gomesi described in the present paper, the great majority of the Characidium species so far studied have a first metacentric pair which is considerably larger than their second metacentric pair.According to Buckup (1993b) C. zebra are morphologically primitive and occupy a basal position in the phylogeny of Characidium, so the presence of a large first metacentric pair in Characidium cf.zebra could be considered primitive for the Characidium species that present it, being derived the size homogenicity between the 1 th and 2 th metacentric chromosome pairs.A similar karyotypic pattern has been observed in the cis-Andean genera Trichomycterus, made up of small fish that, like Characidium, are usually found isolated in the headwaters of small rivers.Sato et al. (2004) has shown that the Trichomycterus species florensis, reinhardti and auroguttatus form a group in which the first metacentric pair is considerably larger than the second metacentric pair, while Trichomycterus sp.aff.Trichomycterus itatiyae and Trichomycterus davisi form a group for which the first and second metacentric pairs are about the same size, although larger than the other metacentric pairs.The cytogenetic information available in the literature for other Characidium species reports some species with a single chromosome pair bearing Ag-NORs and yet other species with multiple ribosomal sites.Variation in the number of chromosomes bearing Ag-NOR regions is a common feature among characid fishes which has been reported by a number of authors using the Ag-NOR technique (e.g.Almeida-Toledo and Foresti, 1985;Wasko et al., 1996;Jesus and Moreira-Filho, 2003).Maistro et al. (1998a) observed multiple Ag-NORs in some Pardo River Characidium specimens, however subsequent 18S rDNA FISH analysis of the chromosomes of the 4 specimens showed only one pair bearing ribosomal sites (Maistro et al., 2004).The variable number of ribosomal sites found in the Pardo River population could suggest the occurrence of inter-individual numerical polymorphism of the NOR sites, as has been observed for the Prochilodus lineatus chromosome complement (Maistro et al., 2004;Jesus and Moreira-Filho, 2003).
The majority of the Characidium species analyzed by us showed Ag-NORs located on small or medium-sized submetacentric chromosomes and, with exception of C. lauroi, the Ag-NORs were always located on the long arm.It thus seems that in addition to having a diploid chromosome number of 2n = 50 the location of Ag-NORs almost exclusively on the long arm of the chromosomes is another feature common to Characidium.The fact that sympatric Characidium species always showed Ag-NORs on different chromosome pairs (Centofante et al., 2001;2003;present paper) indicates that NOR location is an important cytotaxonomic tool for this group.It may be that FISH analysis with rDNA probes could help in the better understanding of the pattern distribution of ribosome sites on Characidium chromosomes.We found that the Ag-NOR sites were positively stained by the CMA 3 fluorochrome (Figure 2b), suggesting that the rDNA loci of both Characidium species studied in may contain spacer sequences or NOR-associated heterochromatin rich in GC base pairs.On the other hand, these data suggest that the C-band positive segments found in both species are not rich in GC base pairs, a characteristic which has been commonly found in several fish species (Amemiya and Gold, 1986;Maistro et al., 2002;Fonseca et al., 2003;among others).
The C-banding pattern could also differentiate the two sympatric Characidium species studied by us.In both species heterochromatin preferentially appeared at the centromeres or in the pericentromeric regions but C. cf.zebra presented more heterochromatin than C. gomesi, which also presented a few telomeric C-band positive blocks.Centromeric and/or pericentromeric heterochromatin has been observed in the majority of the Characidium species studied and, in smaller quantities, in C. gomesi from the Paiol Grande stream (Centofante et al., 2001) as well as in the C. sp.cf.C. gomesi from the Pardo River (published as C. cf.fasciatum by Maistro et al., 1998a) and C. gomesi studied by us.The available C-banding data shows that the Characidium taxa most related to gomesi present fewer heterochromatin-bearing chromosomes than other Characidium species.Maistro et al. (1998a) described a ZZ/ZW sex-chromosome system for Pardo River C. gomesi, where the Z and W chromosome have the same shape and size and are differentiated from each other by the total amount of heterochromatinization of the W chromosome. Centofante et al. (2001) found a similar sex chromosome system in C. gomesi from the Paiol Grande stream, these fish also presenting high heterochromatinization of the W chromosome but in this case this chromosome was small in comparison to the Z chromosome.Since we detected no sex chromosome differentiation in C. gomesi and considering the fact that these fish live in headwaters, show low geographical mobility and form local populations, attributes can facilitate speciation, we feel that C. gomesi from the Machado River could represent a new Characidium species.
On the basis of the cytogenetic data available on Characidium species we suggest that the general trend of this group towards karyotypic diversification is similar to other fish groups that maintain a conserved karyotype macrostructure but that are quite divergent in terms of NOR location, heterochromatin distribution and the occurrence of sex chromosomes (Koehler et al., 1997;Pereira et al., 2002; for example).The specific characteristics observed in the chromosome structure of the genus Characidium are probably due to the fact that this nominal species is composed of isolated populations found in the headwaters of small tributaries, which probably followed different routes of chromosomal diversification.These characteristics make Characidium an excellent group for supplementing the evolutionary studies which have been carried out with Astyanax scabripinnis and Trichomycterus that have similar ecological characteristics (Moreira-Filho and Bertollo, 1991;Maistro et al., 1998b;Borin and Martins-Santos, 1999).Since only a few Characidium species have so far been investigated, further studies on other Characidium species and populations are necessary in order to better understand the processes underlying chromosome diversification in this group of fish.
Figure 2 -
Figure 2 -Somatic metaphases of Characidium gomesi (a and b) and Characidium cf.zebra (c and d) from the Machado River, after silver nitrate (a and c) and CMA 3 staining (b and d).The arrows point NORbearing chromosomes.
The position of the ribosome sites could clearly differentiate both Machado River populations, C. cf.zebra showing Ag-NOR sites in the subterminal position on the long arm of the 23 th small-sized submetacentric pair while C. gomesi presented Ag-NORs at the telomeric position on the long arm of the large submetacentric pair 17.Centofante et al. (2003) suggested that species of Characidium with a ZZ/ZW sex chromosome systems are more closely related among themselves than to species without such systems.This idea was specially based on the fact that only species of Characidium with sex chromosomes presented Ag-NORs in the terminal region of the long arm of a large submetacentric chromosome pair.The Ag-NOR characterization developed in our study showed that the Machado River C. gomesi are the only Characidium species with Ag-NORs on a large submetacentric chromosome and which do not have a sex chromosome system. | 2,832.2 | 2006-01-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
ARE MARKETS ADAPTIVE? EVIDENCE OF PREDICTABILITY AND MARKET EFFICIENCY OF LODGING/RESORT REITs
We investigate the degree of return predictability of lodging/resort real estate investment trusts (REITs) from January 1994 to May 2016. We test the Martingale hypothesis by using linear (automatic portmanteau and automatic variance ratio with rolling windows) and nonlinear tests (generalized spectral shape tests and Dominguez-Lobato consistent tests). Our findings support the Adaptive Market Hypothesis (AMH) and reveal that returns experience periods of both dependence and independence. We document time-varying predictability of lodging/resort REITs with returns as both initially predictable and subsequently unpredictable throughout the majority of the period of analysis. Moreover, we find that if traders use simple technical trading moving average rules, they can capitalize on the inefficiencies of lodging/resort REITs. Finally, we observe that absolute returns and Sharpe ratios of technical moving average rules outperform a simple buy-and-hold strategy.
Introduction
Efficient Market Hypothesis (EMH) in its weak-form implies that current market prices instantaneously and comprehensively reflect historical information (Fama, 1970(Fama, , 1991. In an efficient market, returns are unpredictable and are independent with no autocorrelation. Yen and Lee (2008) chronologically survey and review the literature on EMH and find that it has less empirical support compared to literature from the earlier three decades. Alternatively, Lo's Adaptive Market Hypothesis (AMH), derived from evolutionary psychology, is gaining popularity in financial economics (Lo, 2004(Lo, , 2005. AMH is based on the concept of relative efficiency and is a novel framework that merges EMH with behavioral alternatives by applying the Darwinian principles of evolution − like adaptation, competition, and natural selection − to capital markets. Lo's 2004 study extends the notion of Herbert Simon's "satisficing" with evolutionary characteristics, and finds that market efficiency, along with return predictability, can be dynamic and vary with time (Simon, 1955(Simon, , 1982. Changing market conditions can occasionally cause predictable asset prices. However, market efficiency is not binary (0 or 1) as it evolves with variations in the underlying market fac-tors. For example, regulatory or institutional changes may influence the efficiency of asset markets.
As explained by Oak and Andrew (2003), market efficiency is important to hotel investors, especially during the valuation and appraisal process of hotel properties. In an efficient market, investors have similar and proper access to information. Therefore, prices reflect a market based on shared information, as opposed to a market in which some users may unfairly access historical data to better predict the future. Investors implicitly assume market efficiency by applying valuation methods to the use of hotel data such as occupancy rates, Average daily rate (ADR), capitalization rates, and expectations of supply and demand are caused by economic or market conditions (Oak & Andrew, 2003).
Real estate investment trusts' (REITs hereafter) subsectors behave differently from one another and therefore, deserve separate evaluation (Block, 2012). As a hybrid of both retail and housing, lodging/resorts are unique assets in nature. As of April 2016, lodging/resort REITs have a market capitalization of $41 billion, according to NAREIT. 1 Lodging/resort REITs are viewed as aggressive investments because of their cyclicality and volatile room and occupancy rates. The demand for U.S. lodging is more closely correlated to the U.S. GDP and economy than any other property subsector. U.S. lodging exhibits high cyclic behavior (Wheaton & Rossoff, 1998).
Compared to other types of REIT subsectors, lodging/ resort REITs have the highest volatility and the highest market risk, as shown by Ro and Ziobrowski (2011) and Kim, Mattila, and Gu (2002b). The vast majority of total risk for hotel REITs (about 84%) is uniquely firm-specific, while the remaining percentage of risk arises from market factors (Kim, Gu, & Mattila, 2002a;Gu & Kim, 2003). Both Kim et al. (2002b) and Jackson (2009) find that lodging/ resort REITs underperform other REIT subsectors after adjusting for risk. However, in a later article, Kim, Jackson, and Zhong (2011) demonstrate that lodging REITs have lower volatility compared to stocks, and recommends adding lodging REITs to investment portfolios for diversification purposes. Tang and Jang (2008) and Kim and Jang (2012) find that the profitability and performance of hotel REITs are similar to that of hotel C-corporations, despite their different organizational characteristics and taxation. Payne (2006) find that the lodging subsector has the highest initial response to shocks and innovations, which are transmitted from other REIT subsectors. In a subsequent study, Payne and Waters (2007), show that lodging is the only REIT subsector to exhibit behavior consistent with periodically collapsing bubbles. Lodging REITs have a higher trading volume, both before and after-the financial crisis of 2008, compared to other subsectors (Jain, Robinson, Singh, & Sunderman, 2017). These findings motivate us to dig deeper into research regarding the lodging REIT subsector.
Feng, Price, and Sirmans (2011) provide a comprehensive comparison of equity REIT subsectors, as well as an analysis of their trends and differences from 1993 to 2009. This comparison shows that the lodging subsector has irregular and fluctuating dividend payout ratios during the sample period; lodging has the highest FFO yield and the highest expense ratio compared to the other subsectors studied. In comparison to other real estate market subsectors, lodging also has the lowest ROA, ROE, profit margin, and Tobin's Q. Interestingly, Feng et al. (2011) show an increase in institutional ownership in the lodging subsector, from 21% in 1993 to 58% in 2009.
Recently, there is considerable interest in studying the lodging/resort real estate market. Both the pricing and performance of domestic and international commercial real estate assets have been extensively examined. While an abundance of studies examine commercial property asset classes − like offices, apartments, and retail properties −relatively few studies investigate the lodging/resort property markets. We explicitly test the predictability and efficiency of the less-examined lodging/resort real estate market. It is necessary that we first investigate and identify the market efficiency of the lodging subsector so that we can see how the results of general real estate markets apply to the hotel market.
To conduct our research, we use moving subsample, fixed window intervals in applied linear (automatic variance ratio and automatic portmanteau tests) and nonlinear tests to measure the level of return predictability. Our findings indicate that the extent of market efficiency in lodging/resort REITs changes over time, which supports the implications of the AMH discussed by Lo (2004Lo ( , 2005. We also find clear evidence of nonlinear dependency in lodging/resort REITs. Our study contributes to the finance and lodging literature because it examines the evolving efficiency and predictability of the under-researched lodging/resort REIT subsector explicitly, as explored by Liu (2010) and Manning et al. (2015).
According to the Adaptive Market Hypothesis (AMH), investors occasionally find arbitrage opportunities. Therefore, we apply technical trading rules to lodging/resort REITs and conclude that mispricing and inefficiency generated return opportunities that are greater than transaction costs. We also find that absolute returns and Sharpe ratios of moving average technical rules outperform a naïve buy and hold strategy. Our research reveals the presence of economically exploitable opportunities where economic benefits exceed transaction costs. Our results will interest and assist investors who continually seek arbitrage opportunities and market inefficiencies to create trading strategies to generate abnormal profits.
Literature review
The concept of market efficiency has extensively been used in the pricing of financial securities. EMH defines an efficient market as one in which trading on available information fails to provide an abnormal return (Fama, 1970). Psychologists and behavioral economists question and critique the primary assumption of rational investors, as well as the premise of complete and instantaneous information absorption, and contend that these suppositions do not reflect fundamental human behavior.
However, recent advances in evolutionary psychology and cognitive neurosciences potentially reconcile EMH with behavioral anomalies (Lo, 2007). Simon's 1955 and1982 works developed and popularized the concept of bounded rationality. Bounded rationality postulates that cognitive limitations potentially restrict peoples' decisions. Most often, people act as satisficers and seek a satisfactory solution rather than an optimal and rational answer. Lo (2004) extends Simon's concept of satisficing by using evolutionary dynamics and offers a new framework called the Adaptive Markets Hypothesis (AMH). Brennan and Lo (2011) later create a binary period model for AMH. Several recent studies provide empirical evidence in support of AMH in stock markets (e.g., Kim, Shamsuddin, & Lim, 2011;Urquhart & Hudson, 2013;Ghazani & Araghi, 2014;Almudhaf, 2017), foreign exchange (e.g., Charles, Darné, & Kim, 2012), and precious metals' markets (e.g. Charles, Darné, & Kim, 2015).
Prior studies of market efficiency in real estate literature exhibit few contradictions. Numerous studies test market efficiency in real estate (Gau, 1984(Gau, , 1985Rayburn, Devaney, & Evans, 1987;McIntosh & Henderson, 1989;Case & Shiller, 1989). Gau (1984) documents the presence of weak-form market efficiency in the Canadian residential market, while Hamilton and Schwab (1985) find that the U.S. market is inefficient. Later studies like Shiller (1989, 1990), Wang (2004), and Kummerow and Lun (2005) also find evidence of a lack of weak-form market efficiency in housing markets. Studies that examine market efficiency in commercial property markets also find contradicting results. McIntosh and Henderson (1989) find evidence of market efficiency in office markets, while Barkham and Geltner (1995) and Liu and Mei (1992) find office markets to be informationally inefficient.
Schindler (2011) studies 12 emerging and four developed and securitized real estate markets from 1992 through 2009 and documents that the weak form EMH is not rejected by any in seven of the 12 markets. Cabrera et al. (2011) examine the short-horizon return predictability of the ten largest internationally securitized real estate markets and find evidence of inefficiency in many internationally securitized real estate markets. Su et al. (2012) also find that real estate markets are relatively less efficient compared to stock and bond markets.
In their study, Zhou, and Lee find market efficiency in the REIT market and that the degree of predictability decline over time (Zhou & Lee, 2013). As indicated in Jang and Park (2011), there is an increased interest in finance research that focuses on the hospitality discipline. Consequently, we extend and complement the work of Zhou and Lee (2013), which investigates the AMH of the value-weighted, all-REIT index, using data from CRSP/ Ziman Real Estate between the years 1980 and 2009. They document that market efficiency is not an all-or-none condition but varies continuously over time. They also show that market efficiency is dependent upon market conditions. While Zhou and Lee (2013) examines the overall REIT market, our study specifically addresses these questions for the lodging REIT sector. Similar to their study, we employ the automatic variance ratio test of Choi (1999), and the automatic portmanteau test of Escanciano and Lobato (2009) and find that both their implications for the US REIT market holds for the lodging REIT sector as well. The degree of REIT return predictability is also found to be time varying. Our results are generally similar to that of Zhou and Lee (2013) implying that the lodging/ resort REIT market more or less behaves like the overall US REIT market.
Although many studies have examined the presence of market efficiency in real estate markets (with conflicting results), few have focused on lodging/resort properties. Oak and Andrew (2003) use autocorrelation and cross-correlation analyses to test for weak-form market efficiency in hotel real estate markets. They use the Hospitality Valuation Index (HVI) from 1987 to 1999 and find evidence supporting weak-form efficiency. Bloom (2009) documents a significant difference in the betas of up, flat, and down markets by using the historical beta as the predictor of hotel stocks' performance. Mar-Molinero, Menéndez-Plans, and Orgaz-Guerrero (2017) examine the determinant of beta and uncover that the financial crisis of 2008 affected the factors of systematic risk in the European hospitality industry. Moreover, numerous studies apply the event-study methodology to test for a semi-strong form of market efficiency in hospitality stocks (e.g., Borde, Byrd, & Atkinson, 1999;S. H. Kim, W. G. Kim, & Hancer, 2009;Koh & Lee, 2013;Lee & Connolly, 2010).
This paper contributes to the relatively thin stream of work on market efficiency in the lodging industry. Oak and Andrew (2003) examine the market efficiency of the HVI index to assess the primary hotel asset market and track the hotel value changes. Our study differs from theirs in many ways. Using autocorrelation and cross-correlation analysis, Oak and Andrews (2003) finds evidence supportive of weak-form market efficiency in the hotel market. They also document that buy-and-sell trading strategies using prior returns do not earn higher returns than buyand-hold strategies. They use the Hospitality Valuation Index (HVI) from 1987 to 1999 to test the hypotheses while our study uses the monthly price index of FTSE/ NAREIT lodging/resort REIT subsector. The HVI is based on product market information like room occupancy and rate while we use market price indices reflecting the capital market performance. Since we examine the efficiency of the secondary market by tracking the lodging REITs' index, we provide another dimension to the importance of efficiency from an owner´s perspective. Our study not only seeks to examine the efficiency of hotel REITs market but also extends the time span to the 2000s. In addition to these differences, we also employ robust methods including Wild Bootstrap Automatic Variance Ratio, Automatic Portmanteau Test, Generalized Spectral Shape Test and Dominguez and Lobato (2003) consistent test that allows us to explore the changing market efficiency conditions in comparison to autocorrelation and cross-correlation analyses employed by Oak and Andrews (2003).
Prior studies on efficiency use methods that provided a binary output which would indicate if the market was efficient. We use a dynamic, nonlinear approach to account for Lo's (2004) concept of evolving market efficiency rather than being a binary 0 or 1, which enable us to identify the time-varying efficiency of the market over a long period. Moreover, we also use AMH, an evolutionary concept, to explain the time-varying nature of market efficiency.
Data and methodology
We use the monthly price index of FTSE/NAREIT lodging/resort REIT subsector 2 to complete our analysis. This index is a free-float 3 adjusted market, capitalizationweighted index that includes all tax-qualified REITs listed on the NYSE, AMEX, and the NASDAQ National Market. The weight of each lodging REIT is based on the market cap of that REIT relative to the total market cap of all the listed lodging REITs. In our data, the larger lodging REITs (e.g., Host Hotels & Resorts, Hospitality Properties Trust, and Apple Hospitality REIT Inc.) have more weight in the index than compared to the smaller lodging REITs (e.g., Condor Hospitality Trust, InnSuites Hospitality Trust, and Sotherly Hotels Inc.). Our use of monthly frequency is based on data availability. The number of constituents in the index is not constant through time.
As of May 2016, the components of the index include 20 lodging/resort REITs, for a total market capitalization of $40 billion (Appendix Table A1), and we use all data available since the inception of the index. The data spans from January 1994 to May 2016, and the returns were calculated as the first logarithmic difference of the price index. To address the data snooping bias 4 , we use a fixed-length rolling window of three years for the subsamples, which allows for better detection of short-lived periods of return predictability. We start with the first subsample and implement the tests again, moving forward one month at a time to the next subsample; we repeat this procedure until the end of the sample.
Wild Bootstrap Automatic Variance Ratio (AVR)
The variance ratio estimator is were: ( ) 2 Historical monthly data is available from https://www.reit.com/ sites/default/files/returns/Lodging-Resorts.xls 3 Free-float does not include all REITs outstanding shares. It includes only actively readily available stocks for trading and excludes stocks held by insiders, governments, and locked-in shares. 4 Data-snooping is also known as data fishing or data dredging. When reusing the same data, it is possible that significant results are due to luck and generated by chance and not due to the method used. Choi (1999) developed a data dependent method for optimally choosing L. The variance ratio is estimated with the Quadratic Spectral kernel. We select the truncation point to optimally test the null hypothesis of the absence of serial correlation. The standardized statistic is, We use a wild bootstrap for the automatic variance ratio test to improve the power properties of the test, as suggested in Kim's 2009 study.
Automatic Portmanteau test
We use a Box-Pierce test for the data-driven serial correlation of Escanciano and Lobato (2009). The test automatically selects the order of the autocorrelation tested by selecting whether the Akaike information criterion (AIC) or Bayesian information criterion (BIC) is used to determine the order of autocorrelation. We discover that it is robust to conditional heteroskedasticity. The robustified Portmanteau statistic is where:
Generalized spectral shape test
Escanciano and Velasco (2006) We find that it is not necessary to choose the lag order or to formulate a parametric alternative when using this test.
Dominguez and Lobato (2003) consistent test
Dominguez and Lobato base their 2003 study on the Kolmogorov-Smirnov (KS) and the Cramer-von Mises (CvM) test statistics (Dominguez & Lobato, 2003 where: p is a positive integer (Dominguez & Lobato, 2003). Dominguez and Lobato (2003) obtain the p-value by using a wild bootstrap distribution, which checks for an infinite number of orthogonality conditions. Therefore, there is no need for the user to select the tuning parameters, which is considered an advantage over other methods. Schindler et al. (2010) and Schindler (2011) compare the buy-and-hold strategy to technical trading moving average rules to test the possibility of capitalizing on the inefficiencies of lodging/resort REITs. Investors consider an index moving average that breaks from the top down a 'sell' signal. In contrast, investors regard an index moving average as a 'buy' signal when it breaks from the bottom up. We assume that there was no short selling with a 0.1% transaction cost, per transaction. We use the Sharpe Ratio to control for differences in risk in our comparisons of the buy and hold strategy and the technical trading rules.
Results
We interpret the descriptive statistics in Table 1 as an indication that the monthly returns of lodging/resort REITs were leptokurtic (positive excess kurtosis) with fat tails. We also find that the distribution exhibited negative skewness (left-skewed), which indicates that the frequencies of negative returns are higher than those of positive returns. Our findings of the Doornik-Hansen normality test statistic reject the null hypothesis of the normal distribution of monthly lodging/resort REIT returns, culminating at a significance level of 1%. Institutional investors have been attracted to lodging/resort REITs since the Revenue Reconciliation Act of 1993 because of their diversification benefits and their hedge against inflation. Figure 1 shows an upward trend in the prices of lodging/resort REITs from 1994 to 1998, as well as sharp declines. Additionally, there is a market correction from 1998 to 2000 as well as during the 2007−2008 financial crisis. Events such as geopolitical turmoil, wars, terrorist attacks, economic recessions, and uncertainty undoubtedly influence the performance of lodging/resort REITs. The financial crisis caused the industry to suffer from lower demand and high energy prices. Moreover, the time series plot of lodging/resort REIT returns that we exhibit in Figure 1 shows a phenomenon of volatility clustering (substantial changes, both positive and negative, clustering together); this is similar to Zhou and Lee's 2013 results for overall equity REITs (Zhou & Lee, 2013).
In an efficient market, each successive return is independent; however, this is not what we find in our analysis period. Our study shows that there are years in which the markets are inefficient. We display the automatic variance ratio (AVR) statistic along with a 95% confidence band in Figure 2. The AVR statistic is in the rejection region from 1997 to 2001, and again in 2005, although only for a short time, before reverting to the 95% confidence band. We see an apparent drop in predictability after 2005, which we show in Figure 3. We also see an ultimate fluctuation in the degree of market efficiency; this is similar to Zhou and Lee's results for equity REITs in general (Zhou & Lee, 2013). The horizontal line represents the critical value of test statistic at 5% which equals 3.84.
We display the output of the automatic portmanteau test (AQ) 11 and the oscillatory behavior of both dependency and independency in lodging/resort REIT returns in Figure 3. In these results, we find substantial evidence of predictability during the periods 1998 to 2000, 2003 to 2005, and in 2015. Our AQ statistic indicates that lodging/ resort REIT returns are unpredictable during other periods. Our findings are in line with the AMH in Lo's research (2004Lo's research ( , 2005, as market efficiency varied over time. We plot the p-values of the GSS test statistic from Escanciano and Velasco (2006) in Figure 4. We examine the p-values of the test statistic and infer that the return predictability occurs whenever the p-values are below the broken horizontal line. We identify episodes of statistically significant return predictability when the lodging/resort REIT subsector show non-martingale behavior. The return predictability is significant at the 5% or 10% level as long as it is less than 0.05 or 0.1. We find that lodging REITs deviate from the martingale in 1997, 2000, and from 2004 to 2016, as shown in Figure 4. We report the p-value of the DL test from Dominguez and Lobato (2003) and show extended periods of return predictability in Figure 5. We uncover a non-martingale episode between 1999 and 2016. Our tests reject the martingale difference hypothesis for lodging/resort REITs. (2006) Note: the two horizontal lines correspond to 0.05 and 0.1 Figure 5. Results of the DL test of Dominguez and Lobato (2003) We find that portfolios that apply technical analysis rules outperform the buy-and-hold strategy in both absolute and risk-adjusted terms; this remains true when we compared a buy-and-hold strategy to moving average trading rules, as shown in Table 2. We discover that the Sharpe Ratios of the trading strategies are significantly higher and better than those of the buy-and-hold naïve portfolio. We interpret this as evidence against market efficiency and a random walk. Our results are consistent with Glabadanidis (2014); when we apply moving average rules and we find that lodging/resort REITs has the highest alpha, even when we use multiple factors to adjust for risk. This finding indicates that the lodging/resort market is weak-form inefficient (Glabadanidis, 2014).
Discussion and limitations
In an efficient market, the market value of hotel REITs should reflect and be equal to the value of the sum of its underlying asset holdings (hotels). Markets are considered informationally inefficient if there is a significant deviation between the intrinsic value (fair value) and the market price. Our results are of particular interest to investors who continuously seek market anomalies and arbitrage opportunities to develop trading strategies to yield abnormal profits. We also recommend that investors adopt different investment strategies and plan to address changes based on the levels of market predictability. Investors should not continue using similar asset allocation during all market conditions. Passive investing makes the most sense during periods of market efficiency; mainly when it is difficult to predict the market. On the other hand, market timing is potentially profitable only when markets are temporarily inefficient, enabling investors to exploit informational efficiencies to generate abnormal returns actively.
Profit opportunities can exist as long as lodging REIT markets are both liquid and actively traded. Such opportunities will disappear once they are exploited. However, other possibilities will arise as market participants change, and shifts in the market and regulatory conditions impact the flow of information.
Our study is limited because we only use U.S. lodging/resort indices, so our results should not be generalized and applied to internationally developed or emerging markets. Also, we do not use firm-characteristics of lodging REITs, such as size or institutional ownership, to split our sample and re-examine efficiency differences between different groups of lodging REITs. Other researchers could examine the determinants of market efficiency by using financial characteristics from a sample of lodging REITs. This opportunity is left for future research. Also, the next logical step in this stream of work is providing economic reasoning for the different changing states of market inefficiencies in the loging REITs market.
Conclusions
Prior studies on the efficiency of real estate markets find that different market sectors have varying degrees of market efficiency (Gatzlaff & Tirtiroğlu, 1995). These studies focus on various real estate subsectors, including both the primary and secondary residential and commercial markets, and find contradictory results. The findings in the existent literature on commercial property markets do not provide conclusive evidence of market efficiency. Even though several studies inspected other commercial real estate classes, few studies examine the hospitality real estate market. The literature on the market efficiency of the lodging industry is relatively thin. Oak and Andrew (2003) provide initial evidence of market efficiency in the primary hotel asset market by using the HVI index. We extend their work by examining the secondary lodging market by using the lodging REITs index. In particular, we investigate the dependence of lodging/resort REIT return behavior over time and document the changing levels of market efficiency. Oak and Andrew (2003) do not test for time-varying efficiency, and instead, consider efficiency as a binary condition. Our paper also addresses this gap in research.
Similar to Zhou and Lee (2013) and consistent with the AMH, we document the time-varying nature of return predictability in lodging/resort REITs, from 1994 to 2016, by using both linear (automatic variance ratio and automatic portmanteau) and nonlinear tests (Dominguez-Lobato test and generalized spectral test). Our research provides clear evidence of deviation from the Martingale, and that profit opportunity existed during specific periods. We see a significant nonlinear dependence in lodging/ resort REITs. Our results, regarding the performance of technical trading rules, show that moving average strategies are superior to buy and hold strategies. Even after adjusting for risk, portfolios that use technical rules have higher Sharpe ratios compared to naïve buy-and-hold portfolios. We consider this as evidence against the market efficiency of lodging/resort REITs. However, according to the AMH in Lo (2004Lo ( , 2005, such opportunities exist for only specific periods, and might not remain available Gibbons, Ross, and Shanken (1989) in calculating W statistic, W-modified to test the null hypothesis that: Sharpe Ratio i = Sharpe Ratio j .
* indicates that the trading strategy is superior to the buy-and-hold strategy with 5% significance.
to investors because the efficiency of asset returns remain time-varying in nature. Future research can extend the current strand of literature regarding efficiency by examining individual lodging REITs instead of an index. Since lodging REITs differ in size (market cap), age, institutional ownership, financial characteristics, media coverage, and the number of analysts following the REIT, it can be expected that information dissemination and market reaction may not be similar for all lodging REITs. Similar international studies on developed and emerging lodging markets could add value and assist in a better understanding of this subsector.
Author contributions
FA was responsible for data collection and analysis. RA and JAH were responsible for data interpretation and writing. | 6,300 | 2020-02-17T00:00:00.000 | [
"Economics"
] |
Modeling the strength parameters of agro waste-derived geopolymer concrete using advanced machine intelligence techniques
: The mechanical strength of geopolymer concrete incorporating corncob ash and slag (SCA-GPC) was estimated by means of three distinct AI methods: a support vector machine (SVM), two ensemble methods called bag-ging regressor (BR), and random forest regressor (RFR). The developed models were validated using statistical tests, absolute error assessment, and the coe ffi cient of determination ( R 2 ). The importance of various modeling factors was determined by means of interaction diagrams. When estimating the fl exural strength and compressive strength of SCA-GPC, R 2 values of over 0.85 were measured between the actual and predicted fi ndings using both individual and ensemble AI models. Statistical testing and k -fold analysis for error evaluation revealed that the RFR model outperformed the SVM and BR models in terms of accuracy. As demonstrated by the interaction graphs, the mechanical characteristics of SCA-GPC were found to be extremely responsive to the mix proportions of ground granulated blast furnace slag, fi ne aggregate, and corncob ash. This was the case for all three components. This study demonstrated that highly precise estimations of mechanical properties for SCA-GPC can be made using ensemble AI techniques. Improvements in geopolymer concrete performance can be achieved by the implementation of such practices.
Introduction
The long history of concrete as an essential building material has highlighted the environmental impact of concrete over years [1].With the global demand for cement and concrete expected to triple by 2,050, carbon emissions are projected to increase, and biodiversity is likely to decline at a faster rate than previously anticipated [2].
Due to its large energy and carbon footprint, Portland cement (PC) has been the target of researchers seeking to create alternative binders [2].In the manufacturing process of PC, crucial for concrete binding, approximately 1.80 metric tons of raw materials are utilized, resulting in the emission of 0.8 metric tons of CO 2 [3].Thus, cement output must be mitigated immediately to reduce environmental change [2].One methodical and technical approach to ensuring materials' long-term viability is to recycle them into fresh construction materials from agricultural and industrial waste [4].There are societal, economic, and environmental benefits to producing supplementary cementitious materials from recycled agricultural and industrial waste [5,6].Using recycled materials in place of PC has been proven to be an efficient, affordable, and long-term strategy for reducing one's carbon footprint [7][8][9].
Sustainable concrete, also known as geopolymer concrete (GPC), replaces the PC with recycled agro-industrial resources, making a cementitious binder redundant [10,11].The utilization of alkali hydroxide and alkali silicate appears to be a component of the activation process for raw materials based on the aluminosilicate structure [12].There is a wide variety of reprocessed agronomic and manufacturing materials that have potential as precursors, including fly ash (FAS), red mud (RM), geopolymers (alumino-silicates), rice husk ash (RHA), ground granulated blast furnace slag (GGBFS), silica fume (SF), and metakaolin (MK) [13][14][15][16][17][18].GGBFS in producing GPC presents minimal environmental repercussions alongside favorable cost-effectiveness, heightened rigidity, and exceptional resistance to chemical degradation.Moreover, it holds promise as a key ingredient in eco-friendly and economically viable concrete formulations [19][20][21][22][23].The corncob ash (CCA) component, instead, is novel.More traditional pozzolanic components, including FAS and RHA, can be replaced or supplemented with CCA due to its elevated level of silica.The usage of on-the-spot heated GPC is associated with a number of problems; thus, researchers are considering creating this green concrete at room temperature instead.It is also critical to know that there are other criteria for judging performance outside of reaching strength norms.Evaluating a structure's resistance to environmental and other pressures is essential for accurately estimating its lifespan.GPC is a prospective concrete solution that could be used in ecologically sensitive places because of its improved mechanical capabilities and improved resilience [31].All of the aforementioned factors point to GPC's unique chemical makeup as the source of the material's exceptional mechanical capabilities and endurance [18,24,25].Using nano-silica and reused plastic particles has allowed GPC to perform better in recent years [26][27][28].Waste-based GPC has numerous advantages, as can be seen in Figure 1.
Experts in the fields of science, engineering, research, and computer programming are starting to notice that AI is having a major influence on how new products are developed and enhanced.Problems exist in the engineering industry, and there is a high demand for individuals who can find ways to integrate AI into their jobs.Nevertheless, there are still certain downsides and performance issues with AI-based systems, even though the future seems bright.Artificial intelligence programs have formidable obstacles when it comes to tasks that people typically take for granted, such as object identification and natural conversation understanding [30].This poses a challenge for modern AI in creating appropriate alternatives for training computer perception.AI systems have utilized machine learning (ML) to tackle these issues [30,31].ML algorithms allow computers to gain the necessary expertise for autonomous action by analyzing a sufficiently large dataset [32,33].Getting back to the qualities that make the most explicit data is the first step before putting the plan into action.The term "feature extraction" is now used to describe this procedure.Then, ML is used to train sample data, attributes, and pattern separation instructions [30,34,35].Modern civil engineering research relies on statistical methods and AI to address ever-increasingly complicated issues.Estimating concrete's compressive strength (CS) is a typical use case for these techniques in civil engineering [15,36].The ability to forecast self-compacting concrete's slump and impact strength [37], varied column axial
Better service life
Figure 1: Advantages of waste-derived GPC in construction [29].
bearing [38], and shear behavior of beams in a structure [39], as well as the forecasting of chloride contamination [40] are some of the harder challenges solved utilizing these strategies.These estimates assist in decreasing the number of test configurations for future investigations, shortening their duration and expense.ML approaches such as artificial neural networks, gradient boosting (GB), expression trees (ETs), Gaussian process regression (GPRs), decision trees (DTs), support vector machines (SVMs), and extreme gradient boosting (XGB) may estimate concrete strength [41][42][43].The mechanical properties of GPC were better predicted by the individual and ensemble models than by any of the other models.This study used experimental data and AI algorithms to forecast the mechanical properties of slag and corncob ash-centered geo-polymer concrete (SCA-GPC), a GPC composed of slag and CCA.One standalone ML method and two ensemble ML processes were employed in the study to accomplish its goals.One method to evaluate the models' accuracy involved comparing the predicted and actual outcomes, using statistical tests, and performing K-fold analysis.Carrying out experiments is difficult because of the lengthy and complex procedures involved in collecting materials, casting samples, curing them to increase strength, and evaluating them.Modern modeling techniques like ML can significantly aid the construction industry by overcoming these challenges.Conventional testing methods struggle to assess the overall impact of all parameters on SCA-GPC strength.To identify the most important variables, this study employed interaction graphs.Data necessary for ML approaches can be gathered from existing research.The dataset now has a plethora of potential uses, such as in ML algorithms, impact studies, and material property estimations.Utilizing an experimental dataset, this article validates the efficacy of ensemble ML algorithms in predicting SCA-GPC strength.The study's findings might pave the way for greener construction methods, which would enhance GPC's value to the business.
Collecting and evaluating data
The research employed ML models, including SVM, bagging regressor (BR), and random forest regressor (RFR), to predict the CS and FS of SCA-GPC.The experimental investigation yielded a dataset comprising 260 data points [44].According to the eight input variables (NaOH pellets [SHP], molar concentration [MC), GGBFS, curing day [CD], fine aggregate [FA], water [W], CCA, and concrete grade [CG]), the CS and FS of SCA-GPC were predicted.The data were collected and organized using data preparation.Data preparation for mining data is the standard technique for knowledge discovery from data to minimize major obstacles.Data preparation involves eliminating noise and unnecessary details from the dataset.Descriptive statistics in Table 1 provide a comprehensive summary of key characteristics within the refined dataset, offering valuable insights into its central tendencies, variability, and distribution.These statistics serve as fundamental tools for understanding the dataset's structure, facilitating informed decision-making and hypothesis testing in subsequent analyses.One common way to find parameter dependencies is to use Pearson's correlation coefficient (r) [45].Two Figure 2(a) for CS and 2(b) for FS, show the results of the association map plot for the attributes.The r-squared test is useful for demonstrating parameter dependency and multicollinearity [46].Within the range of −1 to +1, a strong negative relationship is provided by −1, a strong positive link by +1, and no correlation at all by 0 for the r-value [47].This correlation Modeling the strength parameters of agro waste-derived geopolymer concrete 3 between the input variables and the output (CS and FS) is displayed in the bottom row of Pearson's array.
ML modeling
Laboratory studies were used to assess the mechanical properties of SCA-GPC.While CS and FS need ten inputs, the prediction models were built using just eight of the variable inputs.The SCA-GPC's CS and FS were predicted using advanced ML algorithms that included SVM, BR, and RFR.The study achieved its goals by using Python code in the Spyder environment of Anaconda Navigator (version 5.1.5).Typically, ML algorithms are utilized to compare outputs with inputs throughout the process.Researchers allocated 70% of the data for training ML models, reserving the remaining 30% for testing.Additionally, the R 2 value of the predicted outcome served as an indicator of the model's reliability.A low R 2 score signifies a significant deviation between predicted and actual outcomes, highlighting substantial discrepancies in the model's predictive accuracy.This metric serves as a crucial indicator of the model's efficacy in capturing the variance within the dataset, with lower scores suggesting a less accurate representation of the observed data [51].The correctness of the model was validated by a number of analyses, which included statistical examinations and evaluations of errors.A simple graphical representation of an event model is shown in Figure 3, which may be seen hereunder.
Support vector machine
For supervised ML tasks like data regression and classification evaluation, there is the support vector machine (or SVM).SVM classification systems employ diverse categorization strategies aimed at maximizing the separation between different classes to the greatest extent possible within practical constraints.This approach ensures robust classification performance by effectively delineating boundaries between distinct categories in the feature space.For the purpose of depicting the samples, this method makes use of points on a plane or line.The additional instances are arranged in a manner that corresponds to their orientation along the vector, as shown in Figure 4. Figure 5 delineates the systematic approach for implementing SVM models, meticulously designed to deliver a holistic assessment of material strength considering multiple influential factors.This framework empowers users to fine-tune SVM model parameters using sophisticated optimization techniques, thereby augmenting its predictive precision and utility in material strength analysis.
BR
The BE technique is illustrated in Figure 6 The median prediction from several simulations is utilized in regression [54].Twenty separate models are used separately to optimize the SVM-based bagging technique and find its best output.
Random forest
Random split selection, in conjunction with bagged decision trees, allows for the attainment of RFR [56].The assembly and operation of the RFR model are depicted in a simplified diagram, as shown in Figure 7.There is a random selection process for both the training data and the input parameters required to create each branch split in the forest's trees [57].The natural diversity of the tree is enhanced by the presence of this variable.When it comes to the forest, only completely developed binary trees are there.In the realm of universal regression techniques, the RFR method has been demonstrated to be effective.When the amount of variables exceeds the maximum number of possible clarifications, it has been demonstrated that combining the results of a large number of decision trees that were chosen at random yields more accurate results.It is useful for both planned and unplanned learning activities because the significance of its indications shifts significantly throughout the course of time [56].
Model's validation
For the purpose of ensuring that the ML models had an accurate representation of the data, a number of distinct mathematical techniques and k-fold procedures were developed and implemented.The k-fold technique is frequently applied for the purpose of determining whether or not a procedure is considered to be effective.This strategy involves arbitrarily dividing the data set into ten different categories [59].As depicted in Figure 8, ML simulations are trained using nine distinct sets, with only one reserved for validation.ML methods exhibit good performance in scenarios with low error and high R 2 .Additionally, to yield positive outcomes, the treatment needs to be conducted a total of ten times.The precision of the model, which was previously quite excellent, is greatly improved by this procedure.Various ML approaches were also correlated by employing statistical error evaluation metrics like mean absolute percentage error, mean absolute error (MAE), and root mean squared error.
Eqs. ( 1) and ( 3), obtained from previous works, were employed to statistically test the precision of the ML methods' estimates [60,61] ( ) In this context, n stands for the total number of obser- vations, P i refers to the anticipated results, and T i indicates the actual measured values.
Input parameter interaction analysis
Python and Jupyter Notebook 6.4.12 simulated input feature interaction.Matplotlib was used to create interaction graphs.Jupyter Notebook enables users to write and share interactive code, graphs, equations, and text documents online [62].Among the numerous uses for this platform are filtering of data and alterations, mathematical simulation, arithmetical modeling, and data conception, among others [63].For visualizing two-dimensional data arrays, Matplotlib is one of the Python libraries that is used the most frequently [64].Initiating the plot() method, preparing the data, and establishing the required dependencies are all prerequisites to starting.In order to display a plot, the show() method must be used.Matplotlib is a Python library that uses NumPy, an extension for Python used in numerical mathematics [65].It includes a variety of graphs: line, bar, scatter, and histogram.The study utilized scatter plots to visually depict the relationship between the input variables, a method commonly employed in numerous comparable investigations [66,67].
3 Results and analysis 3.1 CS models
CS-SVM model
Figure 9 shows the results of estimating the CS of SCA-GPC using the SVM model.Figure 9(a) graphically illustrates the agreement between the anticipated and observed CS.The dotted lines indicate a 20% deviation from the solid black line, which represents a perfect match with the data.Predictions for CS from the SVM model and the measured values were very close.To effectively determine the CS of SCA-GPC, the SVM technique was utilized.The results displayed a notable level of accuracy, with 83% of predictions falling within the 20% criterion and an R 2 value of 0.8745.Figure 9(b) shows the range of differences (errors) between experimental and predicted values using the SVM method is illustrated.The erroneous values exhibited a standard deviation ranging from 0.09 to 9.50 MPa, with an average of approximately 3.42 MPa.Specifically, there were 17 values below 1 MPa, 20 falling between 1 and 3 MPa, and 41 exceeding 3 MPa.Despite the scattered data, the error distribution suggests that the SVM model can effectively predict the CS of SCA-GPC.
CS-BR model
Figure 10 presents the outcomes of estimating the CS of SCA-GPC using the BR model.Figure 10(a) shows a distinct correlation between observed and predicted CS values, where the solid black line represents an ideal fit, and the dotted lines indicate a deviation of up to 20%.The experimental CS values closely align with the predictions from the BR model.The BR technique demonstrates remarkable performance, achieving an R 2 value of 0.9365 and with 97% of predictions falling within the 20% deviation threshold, indicating significant accuracy enhancement.Figure 10(b) shows the range of errors between experimental and predicted values using the BR technique, with incorrect results exhibiting a standard deviation ranging from 0.07 to 7.42 MPa and an average of 2.19 MPa.The data are categorized into 19 instances below 1 MPa, 27 falling between 1 and 3 MPa, and 22 exceeding 3 MPa.Error distribution analysis indicates that the BR model provides more precise predictions for SCA-GPC CS compared to the SVM model, albeit with slightly narrower variability in measurements.predicted by the RFR model closely resemble those obtained experimentally.The RFR technique exhibits remarkable accuracy in estimating the CS of SCA-GPC, boasting an impressive R 2 value of 0.9688, with 99% of predictions falling below the 20% threshold, as depicted in Figure 11(b).The error range, depicting the differences between experimental and predicted values using the RFR approach, varies with a standard deviation ranging from 0.01 to 11.56 MPa and an average of approximately 1.32 MPa.Further analysis reveals that among the total values, 48 were below 1 MPa, 21 fell between 1 and 3 MPa, and only 9 exceeded 3 MPa.The error distribution underscores the superior accuracy of the RFR model in predicting the CS of SCA-GPC compared to both the SVM and BR models, with significantly reduced error spread.
FS models 3.2.1 FS-SVM model
Figure 12 shows the result of using the SVM model to approximate the FS of SCA-GPC. Figure 12(a) shows the agreement between the expected and observed FS.In terms of FS, the predictions made by the SVM model were pretty comparable to the values that were measured.An effective estimation of the FS of SCA-GPC was achieved through the utilization of the SVM analysis.A high level of accuracy was exhibited by the model, which, similar to the CS-SVM model, had an R 2 value of 0.8853 and had a 100% of its predictions falling below the threshold of 20%.As shown in Figure 12(b), the SVM method's projected values differ from the experimental values by a wide variety of margins.The erroneous results had a standard deviation ranging from 0.003 to 0.763 MPa, with an average of approximately 0.276 MPa.Additionally, the analysis unveiled that 45 of the values were below 0.3 MPa, 27 fell within the range of 0.3-0.5 MPa, and 6 were found to exceed 0.5 MPa.It is clear, after examining the distribution of the errors, that the FS of SCA-GPC may be predicted by applying an SVM model despite the fact that its measurements (errors) are widely dispersed.
FS-BR model
For the purpose of approximating the FS of SCA-GPC, the BR model was utilized, as shown in Figure 13. Figure 13(a) shows the graphic representation of the agreement between the observed and projected FS.The experimental results for FS were very close to the predictions made by the BR model.A remarkable level of accuracy was achieved when the BR approach was used to efficiently identify the FS of SCA-GPC.The method's R 2 value was 0.9293, and all of its predictions fell inside the 20% criterion.Experimental FS (MPa) Predicted FS (MPa) Error
FS-RFR model
For the purpose of approximating the FS of SCA-GPC, the RFR model was utilized, as shown in Figure 14.It is possible to notice a graphical representation of the degree of agreement that exists between the anticipated and observed FS in Figure 14(a).The FS values predicted by the RFR model and those obtained experimentally were very similar.An R 2 value of 0.9753, coupled with all predictions falling within the 20% threshold, highlights significantly improved accuracy in determining the FS of SCA-GPC through the RFR approach.In Figure 14(b), the distribution of errors or discrepancies between the experimental and predicted values using the RFR approach is illustrated.On average, the incorrect readings were around 0.090 MPa, with a standard deviation ranging from 0.002 to 0.936 MPa.Additionally, it was observed that 77 of the values were below 0.3 MPa, there were no values falling between 0.3 and 0.5 MPa, and only 1 value exceeded 0.5 MPa.By looking at the distribution of the errors, it is evident that the prediction of FS of SCA-GPC utilizing the RFR model was expressively more accurate than both the SVM and BR models, with significantly lesser spread measurements (errors).
Validation of models
Table 2 displays the results of Eqs. ( 1)-( 3) applied to the CS and CS-approximation models in terms of the computed errors (MAE), root-mean-square error (RMSE), and mean absolute percentage error (MAPE).The MAEs for CS predictions using SVM, BR, and RFR were 3.420, 2.190, and 1.320 MPa, respectively.SVM, BR, and RFR all improved performance by an average of 11.10%, 6.90%, and 3.90%, respectively, according to the MAPE metric.Moreover, the RMSE values were calculated as 4.194 MPa for SVM, 2.907 MPa for BR, and 2.211 MPa for RFR.Similar trends were observed in the prediction models for flexural strength (FS) regarding MAE, RMSE, and MAPE, as seen in the CS prediction models.These findings indicate that compared to SVM and BR models, the RFR method offers superior accuracy.Table 3 displays the outcomes of computing R 2 , RMSE, and MAE to validate the K-fold approach, while Figure 15 illustrates the K-fold assessments of various ML techniques for predicting CS and FS.The SVM approach
Interaction of input parameters
This section analyses the relationship between the input variables and the final product, CS. Figure 16 illustrate the scatter plots comparing the CS of SCA-GPC with different inputs.The scatter plots are accompanied by bar graphs depicting the frequencies of the input and output components.The GGBFS effect and interaction are illustrated in Figure 16(a), which clearly demonstrates that the mechanical properties of concrete were directly impacted by both inputs.This indicates that the SCA-GPC's strength was linearly proportional to the GGBFS content.The increased silica content in the GGBFS employed in different research may explain the higher quantities of GGBFS [68]. Figure 16(b) shows that the relationship between SCA-mechanical GPC's and CCA's characteristics was indirect.As the CCA concentration increased, the mechanical characteristics of SCA-GPC gradually degraded.Up to 800 kg•m −3 , the strength of FA-input rose as the FA content grew; after that, the strength decreased significantly.Subsequently, when the FA concentration surpassed 850 kg•m −3 , the SCA-GPC once again gained strength. Figure 16(c) shows the relationship between the FA content and the mechanical characteristics of SCA-GPC. Figure 16(d)-(j) illustrate that factors such as CA, W, SHP, SSG, CD, MC, and CG have a minimal impact on concrete strength due to the low variability in the content of these input factors.The outcomes of the interaction analysis were notably influenced by both the raw material utilized and the size of the data sample under examination.Adjusting the input parameters and sampling frequency yields different results.It is important to note that the inputs and database size used to run the algorithms determined the aforementioned results.Using different databases and input factors can result in different outcomes.Further research is needed to enhance understanding of the relationship between the material's components.
Discussions
In this study, the ML models are used to ensure that the predictions are specifically suited to GPC.This situation arises due to the limitation of these models to accept values from a constrained set of eight input variables.Given that all models utilize the same unit measurements and testing technique, it is feasible to depend on the CS and FS predictions generated by any of the models.If there are more than eight parameters in the composite analysis, it is possible that the projected models will not function properly.
If the data used to train these models differ significantly dissimilar to what they are intended to achieve, it is possible that they will not perform as predicted.It is dependent on the degree of consistency or variation in the units of the input parameters as to how accurately the models anticipate the results.For the models to correctly function, it is essential to keep the unit sizes consistent.ML models offer diverse applications within the construction industry, encompassing tasks like material strength forecasting, quality assurance, risk assessment, predictive maintenance, and improving energy efficiency.Nonetheless, these models Modeling the strength parameters of agro waste-derived geopolymer concrete 13 encounter challenges such as reliance on human input, utilization of potentially inaccurate data, and occasional errors in predictions.To overcome these hurdles and optimize MLdriven outcomes, future research avenues could include integrating Internet of Things (IoT) devices, developing hybrid models, adopting explainable AI methodologies, incorporating sustainability considerations, and tailoring data generation and dissemination processes for specific industrial sectors.These advancements in technology have the potential to yield significant advantages for the construction field, facilitating higher levels of efficiency, comprehension, accountability, and well-informed decision-making alongside enhanced safety and project efficacy.The findings of this study could also promote more environmentally responsible building practices in the construction industry, potentially increasing the adoption of GPC.
Conclusions
Using three different ML models, including SVM, BR, and RFR, the purpose of this work was to make a prediction about the mechanical properties of GPC (SCA-GPC) that was made up of slag and CCA.For the purpose of training and verifying the models that were produced, 260 different sets of data pertaining to mechanical characteristics were utilized.These sets included CS and FS.The following are some of the most significant findings that emerged from the research: • The study's conclusion indicates that RFR models exhibited the highest accuracy in predicting the CS and FS of SCA-GPC among the models assessed.The R 2 values for the three ML models (SVM, BR, and RFR) that were created for SCA-GPC's CS and FS prediction were all greater than 0.85.• Models were assessed for efficacy using statistical measures (MAE, RMSE, and MAPE).A more accurate ML model was represented by a lower error value.The lower error rates supported statements that RFR models accurately predicted SCA-GPC's CS and FS.• K-fold analysis (MAE, RMSE, and R 2 ) also validated the RFR model's exceptional precision as paralleled to the commendable precision of SVM and BR models.• The input/output interaction analysis revealed that the most important input parameters that had a stronger correlation with the CS and FS of SCA-GPC were FA, CCA, and GGBFS.
The methodology detailed in this article allows scientists and engineers to effectively assess, enhance, and validate GPC mixture proportioning.Nevertheless, additional research is needed to assemble a broader dataset encompassing a diverse range of strength grades to facilitate the development of prediction models.
, a simplified flow diagram.A comparable ensemble method is the most effective way to describe the steps required to augment the forecast model with additional training data sets.Asymmetric sample statistics are substituted for the original set of statistics.With each new batch of training samples, it is
Figure 11 Figure 9 :
Figure 11 shows the utilization of the RFR model to estimate the CS of SCA-GPC. Figure 11(a) illustrates the agreement between observed and predicted CS values.The CS values
Figure 10 :
Figure 10: (a) Connection between experimental and predicted CS in the CS-BR model and (b) scattering of errors and predicted CS.
Figure 11 :
Figure 11: (a) Connection between experimental and predicted CS in the CS-RFR model and (b) scattering of errors and predicted CS.
Figure 13 (
b) illustrates the range of discrepancies (errors) between the BR-predicted and experimental values.The error values, averaging approximately 0.205 MPa, had a standard deviation ranging from 0.003 to 0.591 MPa.Moreover, 56 of the values were below 0.3 MPa, 18 fell between 0.3 and 0.5 MPa, and 4 exceeded 0.5 MPa.The BR model's FS prediction of SCA-GPC was noticeably more accurate than the SVM model's, with slightly lower spread measurements (errors), as can be seen from the distribution of the errors.
Figure 12 :Figure 13 :
Figure 12: (a) Connection between experimental and predicted FS in the FS-SVM model and (b) scattering of errors and predicted FS.
Figure 14 :
Figure 14: (a) Connection between experimental and predicted FS in the FS-RFR model and (b) scattering of errors and predicted FS.
Table 2 :
Assessment of errors through statistical methods
Table 3 :
Accuracy metrics (RMSE, R 2 , and MAE) obtained from k-fold analysis | 6,424.6 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Environmental Science",
"Computer Science"
] |
Synthesis of a copolymerization and the anti-high temperature property in oil well cement slurry fluid
s. Four monomers including AMPS, acrylic acid (AA), N,N-dimethylacrylamide (DMAA), and sodium allylsulfonate (AS) were used to synthesize quaternary fluid loss reducer (FRW) through copolymerization. The optimal reaction conditions were optimized by orthogonal experiment method, and the water loss reduction performance of FRW was evaluated. The results show that the optimal synthesis conditions for FRW are that the molar ratio of AMPS:AA:DMAA:AS is 4.2:1.8:3.5:0.5, the amount of initiator added is 0.3% of the total monomer mass, the monomer mass percentage is 40%, The reaction temperature was 50°C, and the pH was 7. The performance evaluation results show that FRW has good fluid loss control ability. Under the condition of adding 2%, the fluid loss is 141mL when the temperature is 220℃, which has no negative impact on the comprehensive performance of cement slurry and has strong practicability.
Introduction
The main function of the oil well cement slurry fluid loss additive is to prevent or reduce the rate of the mixing water in the cement slurry from filtering to the formation [1,2]. With the gradual depletion of shallow oil and gas resources, the exploration and development targets have shifted to deeper oil and gas reservoirs [3][4][5]. The cementing process will not only face the problem of high temperature and high pressure, but also may encounter high salinity [6,7]. Stratum of formation water. This poses a new challenge to the performance of oil well fluid loss additives. At present, the fluid loss agent used in my country is mainly a copolymer of 2-acrylamido-2methylpropanesulfonic acid (AMPS) and acrylamide (AM) as the main monomers [8,9,10]. This kind of fluid loss agent has low temperature resistance and salt resistance, and is easy to decompose at high temperature, causing excessive retardation of the slurry, and also reducing its ability to control fluid loss [11,12,13]. In order to solve the above problems, based on the summary of previous research results, four monomers including AMPS, acrylic acid (AA), N,N-dimethylacrylamide (DMAA) and sodium allyl sulfonate (AS) were selected. A quaternary fluid loss agent (FRW) was synthesized through copolymerization, and its structure and performance were tested.
Synthesis method of fluid loss agent
According to a certain molar ratio, weigh 4 kinds of monomers, AMPS, AA, DMAA, AS, etc., and dissolve them in a certain amount of distilled water, then transfer them into a 4-neck flask, and adjust the pH value with 30% sodium hydroxide solution by mass fraction. Blow in nitrogen and heat to the reaction temperature, then add initiator (ammonium persulfate: sodium bisulfite=1:1), keep the temperature constant, stir for a certain period of time, and then obtain the quaternary copolymerization fluid loss agent (FRW). FRW performance test The performance evaluation of fluid loss agent is carried out in accordance with my country's oil and gas industry standard SY/T5504.2-2005 oil well cement admixture evaluation method. The cement slurry formula is: Jiajiang Grade G cement + 35% quartz sand (adding at an experimental temperature greater than or equal to 110°C) + 5% microsilica + 0.75% dispersant (SXY) + x% FRW + x% retarder (BXR-300L) + x% NaCl tap water. The x% in the formula represents the mass percentage of the additive in the cement, and the slurry is formulated according to the water-solid ratio of 0.44.
Effect of monomer ratio on fluid loss reduction performance
FRW performance is mainly determined by the number and ratio of functional groups, so the monomer ratio is one of the key factors that determine FRW performance. According to the literature investigation and preliminary investigation, the following synthesis conditions are used: the initiator is 0.4% of the total monomer mass, the monomer mass fraction is 40%, the experimental temperature is 50°C, the experimental pH value is 7.0, and the reaction time For 4h, 5 groups of different monomer ratio experiments were designed (Table 1). Then test its water loss performance at a FRW addition of 2% and an experimental temperature of 150°C. It can be seen from Table 1 that the FRW samples with the proportions numbered 4 and 5 have low fluid loss and close to each other after adding cement slurry. Although the fluid loss of No. 4 is less than that of No. 5, it is considered that if the content of carboxyl groups in FRW is higher, it will cross-link with Ca 2+ and Al 3+ produced by cement hydration, resulting in increased slurry consistency and thickening. The "wrapped heart" phenomenon will also appear in the experiment [14].
Considering the fluid loss of cement slurry and the properties of the slurry, the optimal monomer ratio relationship is determined to be AMPS: AA: DMAA: AS=4.2: 1.8: 3.5: 0.5.
Optimization of synthesis conditions
On the basis of determining the best monomer unchanged, a set of orthogonal experiments was designed, taking the fluid loss of the cement slurry as the evaluation standard, and considering its influence on the strength and performance of the cement stone, and optimizing the synthesis conditions. The factors of the orthogonal experiment are shown in Table 2. The 24h strength test condition is 21MPa×150℃ in water bath. Through the analysis of the orthogonal experiment results, the optimal synthesis conditions were determined as follows: the initiator is 0.3% of the total monomer mass, the monomer mass percentage is 40%, the reaction temperature is 50°C, the pH value is 7, and the nitrogen flow time is 30min, the reaction time is 5h.
High temperature resistance
One of the most important performance indicators of FRW is high temperature resistance. Therefore, the above formula will be used to test the high temperature resistance of FRW. The addition amount of FRW is 1.0% and 2.0%, and the addition amount of retarder is 2.0%.
The experimental results are shown in Figure 1. It can be seen from Figure 1 that although the fluid loss of cement slurry gradually increases as the temperature increases, the fluid loss is 152 mL when the temperature is 220°C. As the amount of FRW increases, the fluid loss of cement slurry decreases. For example, when the temperature is 220°C and the FRW increases from 1.0% to 2.0%, the fluid loss decreases by 8%.
Salt resistance
Stratums with high salinity often greatly reduce the effectiveness of fluid loss additives. In order to test the salt resistance of FRW, cement slurries were prepared with different concentrations of NaCl solution to investigate the salt resistance of FRW. The addition amount of FRW is 2%, the addition amount of retarder is 2%, and the curing condition of cement stone is 150℃×21MPa×48h water bath curing. The experimental results are shown in Table 4. It can be seen from Table 3 that with the increase of the salt content in the cement slurry, the fluid loss and water separation of the cement slurry increase slightly, but the increase is small, indicating that FRW has good salt resistance. The morphology of C-S-H gel changes, and the strength of cement stone gradually decreases. But in the case of saturated brine, the strength still reaches 20.5MPa, which is very different from the general low strength of saturated brine cement slurry.
The influence of FRW on cement slurry engineering performance
The fluid loss agent not only achieves the function of reducing fluid loss, but also meets other performance requirements of the cement slurry, such as rheology and mechanical properties. The effect of FRW on the engineering performance of cement slurry is studied here. The curing condition of cement stone is 150℃×21MPa×48h. The results are shown in Table 4. It can be seen from Table 4 that as the amount of FRW increases, the fluidity index (n) of the slurry decreases slightly, and the consistency coefficient (K) increases slightly, but overall the value of n is greater than 0.80 and the value of K is less than 0.40Pa•s, indicating that the cement slurry has excellent rheological properties, which is very beneficial for improving the displacement efficiency of the eccentric annulus. The water separation rate of the cement slurry is 0, and the filtration loss is less than 50mL, indicating that the slurry stability is good. The thickening time of cement slurry and the strength of cement stone did not change much with the increase of FRW, indicating that FRW would not decompose at high temperature.
Conclusions
In the experiment, a quaternary high-temperature and saltresistant FRW sample was synthesized by the solution copolymerization method, and the best synthesis conditions were optimized: the molar ratio of AMPS:AA:DMAA:AS was 4.2:1.8:3.5:0.5, and the initiator was added as a monomer. 0.3% of the total mass, 40% of the monomer mass, reaction temperature of 50°C, and pH value of 7. At the same time, FRW has good fluid loss control ability. Under the condition of 2% addition, the fluid loss is 141mL when the temperature is 220 ℃, and it has no negative impact on the comprehensive performance of the cement slurry and is practical. | 2,073.4 | 2022-01-01T00:00:00.000 | [
"Materials Science"
] |
Premartensitic and martensitic phase transitions in ferromagnetic Ni 2 MnGa
We present an experimental study of the premartensitic and martensitic phase transitions in a Ni 2MnGa single crystal by using ultrasonic techniques. The effect of applied magnetic field and uniaxial compressive stress has been investigated. It has been found that they substantially modify the elastic and magnetic behavior of the alloy. These experimental findings are a consequence of magnetoelastic effects. The measured magnetic and vibrational behavior agrees with the predictions of a recently proposed Landau-type model @A. Planes et al., Phys. Rev. Lett.79, 3926 ~1997!# that incorporates a magnetoelastic coupling as a key ingredient. @S0163-1829 ~99!01834-2#
I. INTRODUCTION
An interesting feature of martensitic transitions in shapememory alloys is the existence of precursor phenomena.They are a consequence of weak restoring forces in specific crystallographic directions that announce the possibility of a dynamical instability.Commonly, these systems have a lowlying transverse TA 2 phonon branch together with a low value of the corresponding elastic constant CЈ; both the whole branch and the elastic constant soften with decreasing temperature.Other pretransitional effects are diffuse elastic scattering and phonon anomalies on the low-lying branch at certain wave vectors that are close to the reciprocal lattice vector corresponding to the modulation of the lowtemperature martensitic phase.The prototypical example where these anomalies have been extensively studied is the Ni-Al alloy. 1 Precursor phenomena are expected in second-order phase transitions and are not observed in strongly first-order transitions.The martensitic transition is a first-order transition taking place before the complete softening of a response function; that is, before the system becomes harmonically unstable.It has been proposed 2,3 that this is possible due to the anharmonic coupling between a phonon on the transverse TA 2 branch and the long-wavelength shear mode related to CЈ.This picture has been shown to be suitable for qualitatively describing the martensitic transition in Cu-based alloys. 4mong the systems undergoing martensitic transitions, shape-memory alloys are highly attractive.Recently, there has been increased interest in the study of magnetic alloys exhibiting shape-memory properties. 5,6The coupling between structural and magnetic degrees of freedom opens the possibility of a magnetic control of the shape-memory effect associated with the martensitic structure, which confers to these alloys a potential technological interest.In this paper we study the Ni-Mn-Ga alloy close to the Heusler Ni 2 MnGa stoichiometric composition, which undergoes a martensitic transition in the ferromagnetic state.Recently, large magnetic-field-induced strains have been obtained in this system. 7,8rom a fundamental point of view, peculiar pretransitional phenomena have been reported in Ni-Mn-Ga.Remarkably interesting is the fact that, in a certain composition range, this alloy exhibits a pronounced temperature softening of the ( 1 3 1 3 0) phonon on the transverse TA 2 branch, which condensates at a temperature T I , leading to the appearance of a micromodulated structure preceding the martensitic transition. 9Such a microstructure has been studied by highresolution transmission electron microscopy and has been shown to be the reason of the extra spots observed on the corresponding electron-diffraction pattern. 10The wave vector associated with this modulation is different from that corresponding to the five-or seven-layer modulation, characteristic of the martensitic phases in this alloy system.The premartensitic ͑or intermediate͒ phase transition has been shown to be a first-order transition originated by the magnetoelastic coupling between the magnetization and the anomalous TA 2 phonon. 11,12n this paper, we present a detailed ultrasonic investigation of a Ni-Mn-Ga single crystal with composition very close to the stoichiometric one.We focus on the magnetoelastic properties of this alloy system.
II. EXPERIMENTAL RESULTS
The sample investigated was a single crystal with composition Ni 49.5 Mn 25.4 Ga 25.1 grown by the Bridgman technique.The single crystal was obtained by melting appropriate amounts of single crystals labeled 3 and 6 in Ref. 13; the estimated error in the composition is Ϯ0.5 at.%.From the original rod, a parallepipedical specimen (6.75ϫ11.45ϫ4.8 mm 3 ) with faces parallel to the ͑001͒, ͑110͒, and (11 ¯0) planes was cut.In addition, two platelike small samples (3.2ϫ1.4ϫ1.0 mm 3 ) and (2.8ϫ1.6ϫ0.7 mm 3 ) were cut, with the longer direction along the ͓001͔ and ͓110͔ crystallographic axis, respectively, which were used for magnetization measurements.
The crystal exhibits an ordered L2 1 Heusler structure ͑space group Fm3m) at room temperature.For the investigated composition, the Curie point is at T c ϭ381 K; the intermediate transition takes place at T I ϭ230 K, and the martensitic start temperature is T M ϭ175 K.
The velocity of ultrasonic waves was determined by the pulse-echo method, using the phase-sensitive technique.X-cut and Y-cut quartz transducers with resonant frequencies of 10 MHz were acoustically coupled to the surface of the sample by means of Dow resin 276-V9 in the temperature range 270-320 K, and by Nonaq stopcock grease in the temperature range 200-270 K.The room temperature values found for C L ͑ϭ229 GPa͒ and C 44 ͑ϭ102 GPa͒ are close to those reported for a Ni-Mn-Ga crystal with a slightly different composition. 14The value found for CЈ ͑ϭ22 GPa͒ is larger in our sample.It is important to point out that ultrasonic waves propagating along the ͓110͔ direction with ͓11 ¯0͔ polarization are affected by strong attenuation arising from magnetic domain scattering of ultrasonic waves.As a consequence, determination of the actual velocity for these waves is rather difficult.Such a difficulty is reflected by the difference in the values found from neutron data (vϭ1000 m/s͒ ͑Ref.15͒ and ultrasonic measurements (vϭ740 m/s͒ ͑Ref.14͒ performed on exactly the same sample by different authors.The value found for our crystal (vϭ1600 m/s͒ is slightly larger, and the difference is likely to be due to the different composition between the two samples.
The two shear elastic constants C 44 and CЈ show significant softening at the intermediate phase transition. 11,14,16The softening of CЈ is more pronounced than that of C 44 , thus resulting in a relative increase of the elastic anisotropy at the intermediate phase transition, as illustrated in Fig. 1.Below the intermediate phase transition CЈ increases on cooling.This relative increase in CЈ is larger than that of C 44 and the elastic anisotropy decreases as the sample approaches the martensitic transformation ͑see Fig. 1͒.Such a behavior is contrary to the one exhibited by other shape-memory alloys. 17ith the purpose of investigating the interdependence between elastic and magnetic properties, we have measured the magnetic-field dependence of the elastic constants.For these measurements, the sample was placed between the poles of an electromagnet, and magnetic fields up to 10 kOe were applied along the ͓110͔ and ͓001͔ directions.Prior to each magnetic-field scan, the sample was annealed for 45 min at 520 K ͑well above the Curie temperature͒.Such a heat treatment ensured that the measured dependence of each elastic constant corresponded to the first magnetization process.The results obtained are shown in Fig. 2, where solid and open symbols stand for increasing and decreasing magnetic fields, respectively.All elastic constants increase up to a saturation value with increasing magnetic field.CЈ is the constant that exhibits the largest relative change.The saturation values found for fields applied along ͓001͔ and ͓110͔ are, within the experimental error, coincident.Notice that the value achieved by the elastic constants when the field is removed is slightly larger than the initial one.This irreversible effect is a consequence of the magnetic remanence.
It is also instructive to look at the magnetic-field dependence of the ultrasonic attenuation.It is clear from Fig. 2 that, for all acoustic modes, the ultrasonic attenuation decreases with increasing magnetic field. 18The attenuation of ultrasonic waves in ferromagnetic materials is mostly due to the scattering produced by magnetic domains.Present results show that scattering of ultrasonic waves is smaller when magnetic domains are aligned along the same direction.For the modes corresponding to C L and C 44 , the relative decrease is similar for magnetic fields along ͓001͔ and ͓110͔ directions.For CЈ, the relative decrease for the magnetic field in ͓110͔ direction is larger than for the other modes.
Finally, we have also investigated the dependence of elastic constants upon uniaxial compressive stress.For these measurements, the sample was placed inside a universal tensile machine equipped with compression grips.The machine was equipped with a cryofurnace, which enabled us to conduct measurements at different temperatures.As an example, in Fig. 3 we present the room-temperature dependence found for the three independent elastic constants for uniaxial compressive stresses applied along the ͓001͔ direction.All elastic constants increase with increasing stress, indicating an overall stiffening of the crystal.The increase is clearly nonlinear and seems to reach a saturation value.The relative change in the elastic constants is similar for all of them.The amount of change in CЈ is of the order of that measured in other bcc alloys; however, the change in C L and C 44 is about one or two orders of magnitude larger than the typical changes reported for other bcc alloys. 19he combined temperature and stress dependence of ultrasonic waves provides a convenient way of investigating the stress dependence of the premartensitic transition.With this aim, we have measured the temperature dependence of the shear elastic constant C 44 at different levels of applied uniaxial stress along the ͓001͔ and ͓11 ¯0͔ directions.We have used this shear elastic constant instead of CЈ because of the poor quality of the ultrasonic echoes for the waves asso- ciated with this elastic constant, which in some cases made it difficult to clearly define the position of the minimum in the CЈ vs T curve.In Fig. 4 we show an example of the results found during heating runs at stresses of 0 MPa ͑circles͒, 1 MPa ͑squares͒, and 4.5 MPa ͑triangles͒, applied along the ͓11 ¯0͔ direction.The temperature T I of the minimum of the C 44 vs T curves for different stress levels along the ͓001͔ and ͓11 ¯0͔ directions is plotted in Fig. 5.It is clear that the application of a compressive stress decreases the temperature of the forward transition and increases that of the reverse one.Therefore, the premartensitic transition under applied stress occurs with thermal hysteresis.In spite of these similarities, the premartensitic behavior of Ni 2 MnGa turns out to be quite different.The important differences are ͑i͒ the phonon softening in Ni 2 MnGa is more pronounced; and ͑ii͒ for Ni-Al, all elastic constants stiffen with reducing temperature, with the exception of CЈ, which decreases monotonously down to the martensitic transition temperature. 21 the ferromagnetic order exhibited by this alloy. 11Such a possibility was first ruled out by Zheludev et al. 9,15 These authors based their assertion on the fact that they observed a wiggle on the TA 2 phonon branch at ϭ 1 3 at a temperature slightly above the Curie point.However, very recently, Stuhr et al. 22 have performed neutron scattering experiments over a broad temperature range covering the ferromagnetic and paramagnetic phases of a Ni 51.5 Mn 23.6 Ga 24.9 crystal ͑this crystal did not exhibit any transition to the intermediate phase͒.They have found that the temperature dependence of the energy of the soft phonon changes significantly at the Curie point.Such a change indicates that the phonon softening depends on the magnetic ordering in the sample.Interestingly, the phonon softening in the paramagnetic phase (Ӎ0.019 meV 2 /K) is similar to that of Ni-Al (Ӎ0.016 meV 2 /K). 1 This result shows that when the sample becomes magnetically ordered, the softening is enhanced as a consequence of the interaction between the magnetization and the phonon energy.
For Ni-Mn-Ga alloys close to the stoichiometric composition, the soft phonon can freeze at a given temperature (T I ).This freezing gives rise to the development through a first-order phase transition of a micromodulated structure that can easily be detected by the narrowing in the peak width and a remarkable increase in the integrated intensity of the diffraction peaks at ( 1 3 1 3 0).Below T I , the energy of the ( 1 3 1 3 0) TA 2 phonon increases with further decreasing temperature. 9A recent Landau-type model has shown 12 that the occurrence of this first-order phase transition must be ascribed to the existence of a magnetoelastic coupling.Notwithstanding, the transition from the L2 1 structure towards the micromodulated structure has not been observed in all Ni-Mn-Ga samples investigated by different authors.In order to clarify this point it is interesting to collect data for the different transition temperatures from the literature for different samples with compositions around the stoichiometric one.We have observed that in the range of compositions close to the stoichiometric ͑hatched region in the inset of Fig. 6͒, these data can be compiled in a compact representation by plotting the different transition temperatures as functions of a parameter (␣) obtained as a weighted composition (␣ ϭx at.% Gaϩy at.% Mn composition, with xϩyϭ1; it can easily be related to the electron per atom ratio 23 ͒.We have found ͑Fig.6͒ that the x,y values that give the best representation are xϭ0.6 and yϭ0.4.The diagram shown in Fig. 6 delimits the four principal phases: L2 1 paramagnetic, paramagnetic martensite, L2 1 ferromagnetic, and ferromagnetic martensite.The intermediate phase has only been observed in alloy systems in the ferromagnetic L2 1 phase, for which the martensitic transformation temperature is far enough from the Curie point ͑dashed line in the diagram͒.This finding is probably an indication that the magnetoelastic interaction is sufficiently strong to drive the system through the transition towards the intermediate phase.Actually, this would be in agreement with the model presented in Ref. 12, which shows the necessity of a large enough interaction in order to drive the system through the intermediate first-order phase transition.The temperatures of the premartensitic and martensitic transition become closer each to other with decreasing ␣.As a consequence, in a certain composition range, the martensitic transition can mask ͑or even inhibit͒ the existence of the intermediate phase.It is also interesting to notice, that changes in the modulation of the martensitic structure have been reported with decreasing temperature 24 in systems that do not exhibit a premartensitic transition ͑open diamond in Fig. 6͒.They could be a reminiscence of the intermediate transition in the L2 1 phase.
The set of experimental evidences undoubtfully state the existence of a magnetoelastic coupling in Ni 2 MnGa alloys.All elastic constants increase with magnetic field.The relative change in the elastic constants correlates with the square of the magnetization. 25This is consistent with the bilinear coupling between the magnetization and the homogeneous shear proposed in Ref. 12 and 26.It is worth remarking that such an M 2 dependence of the magnetoelastic energy has also been proposed by other authors 27 in order to account for their experimental observations.
It is acknowledged that the application of a uniaxial stress increases the martensitic transition temperature.For Ni 2 MnGa, a dependence of ϳ2.5 MPa/K for uniaxial stresses along the ͓001͔ crystallographic direction has been reported. 28In our investigations, we have restricted our stress range in order to ensure that the temperature increase of the martensitic transition does not interfere with the intermediate transition.We have found that the application of uniaxial stresses modifies the characteristics of the intermediate phase transition: when the sample is subjected to a mechanical stress, the transition occurs with a certain thermal hysteresis.Dynamic mechanical tests 10 and neutron-scattering experiments under uniaxial stress 29 reported the existence of thermal hysteresis at the intermediate phase transition, although no detectable thermal hysteresis was observed by using other experimental techniques. 11Present results confirm the fact the application of mechanical stresses results in a modification of the kinetic characteristics of the phase transition.Moreover, it has theoretically been shown 30 that the effect of applying a mechanical stress is an enhancement of the firstorder character of the intermediate transition.
All elastic constants exhibit an anomalous stress dependence ͑Fig.3͒: the measured increase is not linear, and the relative change is larger than that expected from purely vibrational anharmonic contributions.We argue that such an anomalous behavior could also be related to the magnetoelastic interaction.That is, the application of a uniaxial stress may induce rotation of magnetic domains, resulting in a change in the magnetization.This effect would lead to a modification of the values of elastic constants.This argument is consistent with the experimental finding that the relative change of elastic constants with hydrostatic pressure has been found 31 to be in the usual range for cubic alloys.Measurements of magnetization on samples subjected to controlled uniaxial stresses could provide experimental justification for this hypothesis.
The magnetic-field dependence of the structural transitions has been investigated by several authors. 7,12,27or polycrystalline samples 27 no magnetic-field dependence has been found for the martensitic transition temperature, and the premartensitic transition temperature was not changed by fields less than 0.8 kOe but it was found to decrease for higher fields.We recently investigated more accurately the premartensitic transition at very low magnetic fields by means of an ac susceptometer for fields applied along the ͓001͔ direction. 12These measurements have now been extended to magnetic fields along the ͓110͔ direction and the same behavior is obtained: a monotonous decrease of the intermediate transition temperature with increasing magnetic field.Since we have not found any significant dependence upon the direction of application of the magnetic field, a similar behavior is expected for polycrystalline samples.On the other hand, for the martensitic transition temperature, Ullakko et al. 7 reported a decrease of ϳ2 K from strain vs temperature curves recorded at 0 and 10 kOe.This result is not consistent with the measurements by Zuo et al. 27 With the aim of making an estimation of the temperature change with magnetic field based upon thermodynamic data using the Clausius-Clapeyron equation, we have used recent values for the entropy change at this transition, 12 and have measured the temperature dependence of the saturated magnetization, shown in Fig. 7.An increase in M (⌬M Ӎ130 emu/mol and ⌬M Ӎ70 emu/mol for fields along the ͓001͔ and ͓110͔ directions, respectively͒ is observed at the martensitic transition.These data render a maximum change in the martensitic transition temperature dT/dHϳ2ϫ10 Ϫ2 K/kOe.This value is consistent with the results reported by Zuo et al.: 27 an increase of ϳ1 K will fall within the experimental errors.Although the results by Ullakko et al. 7 are not consistent with the Clausius-Clapeyron predictions, it must be taken into account that nucleation effects may play a relevant role in determining the actual transition temperature of a given sample.
IV. SUMMARY AND CONCLUDING REMARKS
We have performed an experimental investigation of the premartensitic and martensitic transition in a Ni-Mn-Ga single crystal.The main results outcoming from this investigation are as follows.
͑1͒ All elastic constants and ultrasonic attenuation show a significant dependence upon the magnetic field.
͑2͒ The elastic constants show an unusual dependence upon the applied stress which cannot be accounted for by purely anharmonic vibrational theories.͑3͒ The application of uniaxial stress results in a modification of the premartensitic transition: the transition takes place with thermal hysteresis.
͑4͒ By making use of the measured values of ⌬M and ⌬S, it has been proven that the Clausius-Clapeyron equation predicts a change in the martensitic transition temperature with magnetic field around 2ϫ10 Ϫ2 K/kOe.
͑5͒ The premartensitic transition temperature decreases with application of magnetic field even at low ͑0-20 Oe͒ magnetic fields.This behavior does not depend upon the direction of the applied field.
Present results undoubtfully state the existence of a magnetoelastic coupling in this alloy.Such a magnetoelastic coupling is responsible for the first-order phase transition from the L2 1 towards a micromodulated ͑intermediate͒ phase.
It has been customary to compare the lattice dynamical behavior of Ni 2 MnGa to that exhibited by the Ni-Al shapememory alloy.In both alloys pronounced temperature softening of TA 2 phonons ͓close to ( for Ni-Al ͑Ref.20͔͒, accompanied by a ''central peak'' has been reported.
FIG.2.Relative change in the elastic constants and change in the corresponding relative ultrasonic attenuation, for dc magnetic fields applied along the ͓001͔ ͑a͒, and ͓110͔ ͑b͒ directions.Solid symbols correspond to increasing magnetic field and open symbols, to decreasing magnetic field.
FIG. 3 .
FIG. 3. Relative change in the natural velocity of ultrasonic waves associated with the elastic constants C L ͑triangles͒, C 44 ͑squares͒, and CЈ ͑circles͒, for uniaxial stresses applied along the ͓001͔ direction.
FIG. 6 .
FIG. 6. Martensitic ͑solid circles͒, Curie ͑solid up triangles͒, and intermediate ͑open down triangles͒ transition temperatures as a function of a weighted composition parameter.Most data have been collected from Refs. 8 and 13.The open diamond corresponds to the temperature of the change in the modulation of the martensitic phase ͑Ref.24͒.The hatched region in the inset shows the composition range from which data have been taken for this compact representation. | 4,765.6 | 1999-09-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Comparing the Forecast Performance of Advanced Statistical and Machine Learning Techniques Using Huge Big Data: Evidence from Monte Carlo Experiments
+is research compares factor models based on principal component analysis (PCA) and partial least squares (PLS) with Autometrics, elastic smoothly clipped absolute deviation (E-SCAD), and minimax concave penalty (MCP) under different simulated schemes like multicollinearity, heteroscedasticity, and autocorrelation. +e comparison is made with varying sample size and covariates. We found that in the presence of low and moderate multicollinearity, MCP often produces superior forecasts in contrast to small sample case, whereas E-SCAD remains better. In the case of highmulticollinearity, the PLS-based factor model remained dominant, but asymptotically the prediction accuracy of E-SCAD significantly enhances compared to other methods. Under heteroscedasticity, MCP performs very well and most of the time beats the rival methods. In some circumstances under large samples, Autometrics provides a similar forecast as MCP. In the presence of low and moderate autocorrelation, MCP shows outstanding forecasting performance except for the small sample case, whereas E-SCAD produces a remarkable forecast. In the case of extreme autocorrelation, E-SCAD outperforms the rival techniques under both the small and medium samples, but further augmentation in sample size enables MCP forecast more accurate comparatively. To compare the predictive ability of all methods, we split the data into two halves (i.e., data over 1973–2007 as training data and data over 2008–2020 as testing data). Based on the root mean square error and mean absolute error, the PLS-based factor model outperforms the competitor models in terms of forecasting performance.
Introduction
e prediction of macroeconomic variables is very important under macroeconomic studies, monetary policy analysis, and environmental economics. Accurate forecasts induce sound insights into mechanisms of dynamic economies [1], more effective monetary policies [2], and better portfolio management and hedging strategies [3]. In the data-rich environment existing these days, many macroeconomic series are tracked by economists and decision-makers.
Low-dimensional models often include some prespecified economic covariates for instance vector autoregression and therefore have a complication in capturing the dynamic and complex patterns, which contain huge panels of time series [4]. It is a fact that missing important variable(s) leads to an underspecified model, inducing biased results. ere is an intense need to propose updated statistical models and analysis frameworks with the purpose of expanding the low-dimensional counterparts for improved forecasts. us, in the recent era, the analysis of "Big Data" has become the core of economics research. is in turn has resulted in special attention being paid to the huge class of techniques that are available in the domain of machine learning, dimension reduction, and penalized regression [5,6]. Recently, in the regression context, Doornik and Hendry [7] categorized Big Data into three classes: tall big data, huge big data, and fat big data. Each type can be defined as follows: (i) Tall big data: more observations and several covariates (N >> P) (ii) Huge big data: more observations and more covariates (N > P) (iii) Fat big data: fewer observations and more covariates (N < P) where N and P represent the number of observations and covariates, respectively. We graphically represent the Big Data in Figure 1.
Moreover, Stock and Watson [17] elaborately discussed the past studies on the utility of factor models forecasting.
ere is an intensive and growing body of literature in this area. Few of them are relevant, as they address both theoretical and empirical problems, including Armah and Swanson [12,13]; Artis et al. [8]; Bai and Ng [1,33,34], Banerjee and Marcellino [35]; Boivin and Ng [9,10], Ding and Hwang [36]; Dufour and Stevanovic [37]; Stock and Watson [15][16][17][18]; and Smeekes and Wijler [38]. e abovementioned papers consider principal component analysis, independent component analysis, and sparse principal component analysis for the construction of the factor model. However, there is also a small and growing body of literature investigating the classical approach (Autometrics) in the context of macroeconomic forecasting [7,21,22]. We failed to discover any paper to date that has investigated the use of partial least squares (PLS) theoretically in our context. However, the method has been applied empirically in various fields. Apart from this, some papers have utilized shrinkage methods like ridge regression, lasso, elastic net, adaptive lasso, and nonnegative garrote, but none of the papers to date have used the updated forms of shrinkage methods in our context.
Filling the gaps, this work implements some updated techniques of big data to increment literature of macroeconomic forecasting theoretically as well as empirically. From the dimension reduction aspect, we build factor models intending to highlight the importance of such models for macroeconomic prediction. Particularly, while building factor models, we employ principal component analysis (PCA) and partial least squares (PLS). In addition, we also assess the last version of the classical approach (Autometrics) and the updated version of shrinkage methods including elastic smoothly clipped absolute deviation (E-SCAD) and minimax concave penalty (MCP). We evaluate the performance of these techniques in a simulation setting where the true data generating process (DGP) of the factor model is used. To summarize the whole discussion, our prime contribution comes in the form of comparison of updated shrinkage methods and Autometrics with factor models through forecasting under the simulated scenarios having multicollinearity, heteroscedasticity, and autocorrelation along with application to macroeconomic data to provide a conclusive solution to predictability. e study aims to produce an improved method to help policymakers; the improved tool is not restricted to workers' remittances or the stock market (in our case) but is valid for any time series. e remaining part of the paper is organized as follows. In Section 2, we provide a detailed discussion regarding factor models based on principal component analysis and partial least squares. In Section 3, we discuss big data techniques, such as the classical approach and shrinkage methods. Monte Carlo evidence on the comparative performance of several forecasting techniques is discussed in Section 4. Empirical findings are given in Section 5. Section 6 provides concluding remarks.
Methods
e techniques we intend to apply in subsequent sections are reported in Figure 2.
is study aims to compare the predictive ability of factor models based on principal component analysis and partial least squares with Autometrics, elastic smoothly clipped absolute deviation (E-SCAD), and minimax concave penalty under different scenarios like multicollinearity, heteroscedasticity, and autocorrelation. Macroeconomic and financial datasets are used for the analysis of the real phenomenon.
Factor Models.
e notion of factor models also called diffusion index entails the utility of properly extracted hidden common factors that have been distilled from a huge set of features as inputs in the identification of the parsimonious models. To be more specific, let X be an N × P dimensional matrix of data points and define N × k dimensional matrix of latent factors.
Stock and Watson [17] have delineated in depth the literature regarding forecasting through factor models. In 2 Complexity the below detailed discussion of factor model methodology, we follow Stock and Watson [15]: where ε represent the random error matrix, φ ′ is the P × k coefficients matrix, and F is a factor matrix of N × k dimension.
We construct the following forecasting model based on the work of Bai and Ng [39], Kim and Swanson [19], and Stock and Watson [15]: where Y t+h is an outcome variable to be forecasted, h shows the forecast horizon, and F t is the vector of factors with a dimension, distilled from F in equation (1). e associated coefficient c F is a vector of unknown parameters, and e t+h is the random error. e whole process of factor model forecasting consists of two steps: in the first step, we estimate k latent (unobserved) factors, represented by F, from P observable predictors. To gain convenient dimension reduction, k is supposed to be much smaller than P (i.e., k ≪ P). In the second step, we estimate c F , by utilizing data at hand with Y t and F t . Subsequently, an out-of-sample forecast is constructed. Kim and Swanson [19] utilized the PCA approach to achieve estimates of the unobserved factors, known as principal components (PCs). e latent PCs are uncorrelated which are obtained by using the data projection in the direction of maximal variance, and naturally, the PCs are ordered based on their variance contributions. e first PC reflects the direction of the maximal variance in the data, the second PC reflects the direction that explains the maximal variance in the rest of the orthogonal subspace, and so on.
is approach is most frequently used in the literature of factor analysis because PCs are easily derived via the use of singular value decompositions [15,33,34].
Boivin and Ng [10], however, argued that the performance of the factor model is more likely to be worse in prediction if the incorporated factors are dominated by excluding factors. Similarly, Tu and Lee [26] stated that PCA imposes only the factor structure for X and does not consider the outcome variable. It indicates that PCA ignores the dependent variable while performing it. By dint of neglecting the outcome variable at the time of factors, extraction induces an inefficient forecast of the outcome variable. e solution to this problem is given in the next section.
e Partial Least Squares (PLS) Method.
is study looks at another method that is known as partial least squares (PLS) regression developed by Wold [40]. is method is appropriate in a data-rich environment and may be considered as an alternative to PCA-based factor models. Unlike the PCA method, the PLS identifies new factors in a supervised way; that is, it makes use of the response variable to identify new factors that not only approximate the old factors well but are also related to the response variable. Roughly speaking, the PLS approach attempts to find the directions of maximum variance that help in explaining both the response variable and explanatory variables. e PLS for an outcome variable is motivated by a statistical model as follows: where . ., T, c P is an n × 1 vector of associated coefficients, and e t is the disturbance term. Kim and Ko [29] argued that PLS models are useful especially when there are a large number of covariates. Instead of using a model given in (3), one may adopt another data dimension reduction approach through the following linear regression with Z × 1 vector of components s t � [s 1,t , s 2,t , . . . , s Z,t ] as follows: We define s t : where w � [x 1 , x 2 , . . ., x Z ] is the n × Z matrix of each column, w z � [w 1,z , w 2,z , . . . , w n,z ], z � 1, 2, . . ., Z, denote the vector of weights on covariates for z factors or components, and τ is the Z × 1 vector of PLS coefficients. We may use the following equation for predicting the k steps ahead model; that is, y t+k , k � 1, 2, . . ., m.
Classical Approach and Shrinkage Methods
e fundamental comparison of interest here is between automatic selection over variables as against PC and PLSbased factors in terms of prediction. Factors are often regarded as essential to summarize a large amount of information, but the classical approach and shrinkage methods are alternatives.
Classical Approach.
Autometrics is a well-known big data algorithm, which consists of five steps. In the first step, Complexity we begin the process with the construction of a linear model, which refers to the General Unrestricted Model (GUM); in the second step, we obtain the estimates for unknown parameters and test them statistically; the third step entails presearch process; step four delivers the tree-path search; and the last step leads to a selection of the final model. Doornik [41] elaborately delineated the complete algorithm.
e key notion is to commence modeling with a linear model that incorporates all candidate features (GUM). Estimate the GUM by the least squares method and then carry out the statistical tests to validate the congruency of the model. If the estimated GUM contains statistically insignificant coefficients at prespecified criteria, then again estimate the simpler models by utilizing different paths search and ratified by diagnostic tests. As some terminal models are detected, Autometrics undertakes their union testing. e rejected models are discarded, and the union of those terminal models who survived leads to a new GUM for another tree-path search iteration. e whole inspection process proceeds, and the terminal models are statistically checked against their union. If two or more terminal models clear the encompassing tests, then the preselected information criterion decides about the final choice.
e econometric models are achieved by applying Autometrics on the GUM: Under Autometrics, two main strategies are commonly used for model selection, a conservative and a superconservative also called Liberal strategy. Our study implements the Liberal strategy, which is typically based on a one percent significance level rather than five percent. In other words, the statistical significance of each estimated coefficient is based on one percent level of significance.
Shrinkage
Methods. An alternative prominent approach to deal with many features is the family of panelized regression methods, which comprises of many techniques, but our study adopts the following updated forms: elastic smoothly clipped absolute deviation and minimax concave penalty.
Elastic Smoothly Clipped Absolute Deviation. Fan and
Li [42] added a new penalization technique to literature known as SCAD. e technique is nonconvex and enjoys an oracle property: sparsity, continuity, and unbiasedness. is technique selects useful covariates with their magnitudes asymptotically in an efficient way if the underlying true model is known (i.e., the oracle properties).
e SCAD function covers all the limitations faced by the existing methods like ridge and lasso. e penalty function of SCAD is defined as follows: e unknown tuning parameter k was determined by the generalized cross-validation approach, and they assumed the value of c is 3.7. As given above, the penalty function is continuous, and the resulting solution is given by e tuning parameters can be induced from the datadriven technique. e limitation of SCAD is that it selects only one variable from a correlated set of predictors. Zeng and Xie [43] extended the SCAD by augmenting L 2 penalty and called it elastic SCAD (E-SCAD). Mathematically, it can be written as Due to L 2 penalty, the E-SCAD achieves an additional property along with oracle properties; that is, the penalty function should spur highly correlated features to be in or out of the model simultaneously. Hence, the proposed form selects the whole group of correlated predictors rather than one variable.
Minimax
Concave Penalty. Zhang [44] proposed a minimax concave penalty (MCP), which yields the convexity of the penalized loss in sparse regions considerably given specific thresholds for features selection as well as unbiasedness. e MCP is described as follows: e tuning parameter (c > 0) diminishes the maximum concavity under the following restrictions like unbiasedness and selection of features: e dual-tuning parameters in concave penalty regression play a key role in terms of controlling the amount of regularization. Likewise, the concavity of the MCP penalty considerably evades the sparse convexity by dint of diminishing the maximal concavity. In 2010, the author showed that a rise in regularization parameter value leads to bearing more convexity and achieves an almost unbiased penalty. e penalty function of MCP typically belongs to the quadratic spline function.
Monte Carlo Evidence on Forecasting Performance
Our simulation part consists of three main scenarios, namely, simulations on a data generating process (DGP) with (i) multicollinearity, (ii) heteroscedasticity, and (iii) autocorrelation. In each simulated scenario, varying the DGP attributes in terms of correlation strength among features, the magnitude of the variance of the error term, and the magnitude of correlation of error term with previous values (lag).
Data Generating Process.
We generate data from the following equation: e set of predictors X 1 , X 2 , . . ., X P are generated from multivariate normal distribution as X i ∼N (0, Σ). e same data generating process (DGP) was used by [38] as mentioned in (13) for artificial data generation. Our study considers three types of sample sizes for the simulation experiments. We suppose a dual set of features with altering the number of active (p) and inactive features (q), respectively, as portrayed in Figure 3.
In our simulation experiments, we assume three scenarios as follows: in the first scenario: we generate the pairwise correlation between the predictors (i.e., x m and x n as cov(x m , x n ) � |m− n| ). e population covariance matrix is produced in the following way: While altering the parameter Σ, we obtain different correlation structures. In our work, we assume values for Σ ∈ {0.25, 0.5, 0.9} as followed by Xiao and Xu [45]. In the second scenario, we generate the correlation between current and residuals lag (autocorrelation) and symbolized by ρ. e autocorrelation is generated as follows: Our experiments assume the low, moderate, and high cases of autocorrelation, such as ρ ∈ {0.25, 0.5, 0.9}. e third scenario is for examining heteroscedasticity (i.e., means that the variance of the error term is not constant and alters across data points by σ k ).
To evaluate the forecasting performance of all methods, we divide each realization such that 80 percent of the data are used to train the models and the remaining data are utilized for models' evaluation followed by [46]. e entire process will be replicated M � 1000 times. e average of root mean square (RMSE) and mean absolute error (MAE) are computed over "M" to assess the forecast performance. e smaller the values of RMSE and MAE, the closer the predicted values to the actual values and the better the forecast relatively. For analysis, we have relied on several packages like gets, glmnet, ncvreg, pls, caret, forecast, and Metrics under R programming language.
Simulation Results.
e forecast comparison results derived from Monte Carlo experiments are presented in Tables 1-3. All methods are improving their performance by augmenting the number of observations. Increasing the number of irrelevant and candidate variables adversely affects the predictive ability.
Scenario 1.
In the presence of low and moderate multicollinearity, the performance of MCP is superior to other rival methods except for the case of a small sample, where E-SCAD and PLS-based factor models are dominant. To be more specific, in the presence of low and moderate multicollinearity, E-SCAD often produced better forecasts. As we consider the case of high multicollinearity, the PLS-based factor model is superior in particular, while asymptotically E-SCAD outperformed the other methods. Scenario 2. In the presence of all schemes of heteroscedasticity, the performance of MCP is often better than all competitor models. When the number of predictors is equal to 50, Autometrics provides a similar forecast as MCP in large samples. Scenario 3. In the presence of low and moderate autocorrelation, the MCP showed an outstanding performance in terms of forecasting particularly when we increase the sample size. In contrast, when n � 100, the E-SCAD produced a remarkable forecast. In the case of Complexity extreme autocorrelation, E-SCAD outperformed the rival techniques under both small and moderate samples, but as we further augment the sample equal to 400, the MCP induced a more accurate forecast comparatively.
Real Data Analysis
After Monte Carlo experiments, this study performs real data analysis using big data. For real data analysis, we focus on two datasets: macroeconomic data and financial markets. In the context of both datasets, the study considers worker's remittances inflow and stock market data, respectively. It is a fact that many factors influence the worker's remittances inflow and the stock market. Among them, some covariates are recommended by economic and financial theories to be included in the model. Apart from this, a long list of variables has been recommended by past studies. is study considers all the possible determinants based on theories and literature as well to make a general model. In econometrics literature, such a model is known as the general unrestricted model (GUM).
Data Source.
is study collects the annual data for Pakistan from 1973 to 2020. e data is sourced from the World Development Indicators (WDI), International Table 4.
Correlation Matrix.
For empirical analysis, we split the data set into parts: observations from 1973 to 2007 are utilized to train the models and the remaining data are used to evaluate their forecasting performance. But before going to compute the forecast error, we discover the correlation structure among covariates through the visualization approach. In Figures 4 and 5, blue and red colors exhibit positive and negative correlations, respectively. e colors' severity and the area of the circle are directly associated with correlation coefficients. On the right side of the correlogram, the legend color shows the correlation coefficients and the corresponding colors. We can observe that there are many dark color circles in blue and red, which clearly illustrate the high pairwise correlation. In other words, we can conclude that there exists high multicollinearity among predictors under both datasets. Figure 6 reveals that the distribution of stock market data is almost symmetric. Apart from this, 8 Complexity diagnostic tests revealed that the residuals of an estimated model are independently and identically distributed. As we have noted in simulation experiments that in presence of high multicollinearity, the PLS-based factor model outperformed the other methods in terms of forecast error particularly when the sample size is small. It reveals that PLS-based factor is more robust in such circumstances.
Forecast Comparison Based on Two Real Datasets.
Root mean square error and mean absolute error are computed to ascertain the predictive ability of MCP, E-SCAD, Autometrics, and factor models based on PCA and PLS in Figures 7 and 8, respectively. e findings show that PLS-based factor model outperformed the rival methods in the out-of-sample forecast. It illustrates that PLS-based factor model has good predictive power than other competitor models, in terms of having the lowest forecast errors in multistep ahead forecast for the period (2008 to 2020). It supports the simulation results under both real datasets.
Concluding Remarks
is study compares factor models based on principal component analysis and partial least squares with classical approach (Autometrics) as well as shrinkage procedures (i.e., minimax concave penalty (MCP) and elastic smoothly clipped absolute deviation (E-SCAD)). e comparison is made under the presence of multicollinearity, heteroscedasticity, and autocorrelation with altering sample size and number of covariates. We carried out Monte Carlo experiments to compare all methods in terms of prediction. All methods are improving their performance with a growing sample size in all scenarios. Expanding the number of irrelevant and candidate variables negatively affects forecasting accuracy. In the presence of low and moderate multicollinearity, MCP often produced better forecasts comparatively except for the small number of observations, where E-SCAD is dominant. In the case of extreme multicollinearity, the PLS-based factor model is superior, but with increased sample sizes, the prediction accuracy of E-SCAD significantly boosts up as compared to other methods. In the presence of all schemes of heteroscedasticity, the performance of MCP is better than all competitor models. When the number of predictors is equal to 50, Autometrics provides a similar forecast as MCP in large samples. In the presence of low and moderate autocorrelation, the MCP showed an outstanding performance in terms of forecasting except for the small sample case where E-SCAD produced a remarkable forecast. In the case of extreme autocorrelation, E-SCAD outperformed the rival techniques under both the smallest and medium samples, but as we further augment the sample equal to 400, the MCP induced a more accurate forecast comparatively.
For empirical application, macroeconomic and financial datasets are used. To compare the forecasting performance of all methods, we divide the data into two parts (i.e., data over 1973-2007 as training data and data over 2008-2020 as testing data), using both datasets. All methods are trained on training data and subsequently, their performance was evaluated through testing data. Based on RMSE and MAE, the PLS-based factor model is more robust in terms of forecasting than competitor models. is study has several recommendations, reported in Table 4.
Limitations and Future Direction.
e few limitations of this study are that it only focuses on linear models and has E-SCAD is best under a small sample. MCP is the best option in case of a large sample.
PLS-based factor model provides a better forecast under small sample. In case of a large sample, E-SCAD is superior. Heteroscedasticity MCP is best. MCP is best. MCP is best.
Autocorrelation E-SCAD is best under a small sample. MCP is the best option using more data.
E-SCAD is best under a small sample. MCP is the best option using more data.
E-SCAD is best under a small sample. MCP is the best option using a large data set.
Complexity 9
considered yearly data. e simulation part of this study is restricted to Gaussian distributed errors, but in practice, this is not essential that the errors of a model are always normal. Hence, the research can be conducted to discover the forecasting performance of advanced statistical and machine learning techniques under nonnormal residuals as well as missing observations in the data set. is study can be expanded to examine the performance of nonlinear and nonparametric algorithms like artificial neural networks, random forests, support vector machines, etc.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest regarding the publication of this study.
Supplementary Materials
Appendix | 5,973.8 | 2021-12-14T00:00:00.000 | [
"Computer Science",
"Economics",
"Mathematics"
] |
Numerical Simulation of the Smoke Distribution Characteristics in a T-Shaped Roadway
: This paper numerically analyzes the influence of heat release rate (HRR) and longitudinal ventilation velocity on smoke distribution characteristics in a T-shaped roadway when the fire source was located upstream of the T-junction. The back-layering length, critical ventilation velocity, smoke temperature, and CO concentration in the main and branched roadway were investigated and analyzed. The results showed that the ventilation velocity is the key factor influencing back-layering length, while the effect of HRR on back-layering length is gradually weakened as HRR increases. The critical ventilation velocity in the T-shaped roadway is higher than in a single-tube roadway, and the predicted model of dimensional critical ventilation velocity in a T-shaped bifurcated roadway is proposed. The correlation between average temperature (Z = 1.6 m) (both in the main roadway I and the branched roadway) and ventilation velocity fits the power function, and the variation in average temperature (Z = 1.6 m) according to HRR fits the linear formula. The relation between average concentration of CO (Z = 1.6 m) (both inside the main roadway I and the branched roadway) and longitudinal ventilation velocity follows the power relation, and the variation in average concentration of CO (Z = 1.6 m) according to HRR follows the linear function.
Introduction
The underground roadway of mines is a confined space; once fire occurs in the roadway, the high-temperature smoke generates fire pressure and a throttling effect, which can lead to airflow disorder.And the high-temperature smoke will propagate rapidly in the roadway, which will reduce the escape space and threaten workers' lives and health [1][2][3][4].The T-shaped roadway is commonly used in mines; due to the complex geometric characteristics of the T-shaped structure, the fire dynamics and smoke spread in bifurcated roadways will be more complex than in straight roadways.
The smoke distribution in bifurcated roadways and bifurcated tunnels has been studied extensively by many scholars.Xue [5] researched the effect of the bifurcated angle of inclined roadways on the velocity distribution and smoke temperature by using pyrosim software.Lu et al. [6] studied the influence of ambient temperature, ventilation velocity, and heat release rate on smoke temperature and visibility in T-shaped roadways by the numerical simulation method.Gao et al. [7] conducted a series of small-scale fire experiments, measured the back-layering length and smoke temperature in bifurcated tunnels, and proposed the temperature decay model in the main tunnel and the branched tunnel.
Huang et al. [8,9] experimentally investigated thermal smoke movement in branched tunnels, established the predicted model of the maximum ceiling temperature, and quantified the smoke back-layering length under different heat release rates and various longitudinal ventilation velocities.The empirical model is proposed to predict the smoke back-layering length in branched tunnels.Chen et al. [10] compared the smoke temperature distribution of single-line tunnels with T-shaped tunnels by experimentation and developed the modified double-exponential correlation to describe the longitudinal temperature decay process in the entire spreading region of fire smoke in T-shaped tunnel fires.Tao et al. [11] established a model tunnel to research the impact of fire location, bifurcated shaft exhaust velocity, and longitudinal velocity on ceiling temperature and they proposed the temperature decay model.Some scholars [12][13][14][15][16][17][18][19] conducted a series of experiments to study the critical velocity and the smoke backflow in single-line tunnel fires and proposed the prediction model of the dimensionless back-layering length and the ratio of ventilation velocity to critical velocity.Gannouni et al. [20,21] investigated the effect of obstacles on the back-layering length and the critical velocity in single-line tunnel fires by using Fire Dynamic Simulator (FDS) and developed the model to calculate the back-layering arrival time.Lu et al. [22] researched the longitudinal temperature distribution and maximum ceiling temperature by experimentation and the simulation method and proposed a mathematical model to predict the maximum rise in ceiling temperature and the longitudinal temperature distribution in the bifurcated tunnel.Liu et al. [23] studied the effect of longitudinal fire location on temperature distribution in bifurcated roadways and developed empirical models of the maximum temperature in the main tunnel and temperature decay in the branched tunnel.
By and large, the previous studies on bifurcation structure fires focus on tunnel fires, the decay process of ceiling temperature, the maximum rise in ceiling temperature, and the critical ventilation velocity, which were thoroughly studied in bifurcated tunnel fires.Bifurcated roadway fires have been seldom researched; in particular, the back-layering length, temperature, and CO concentration distribution in the main and branched roadways have rarely been studied.The underground space of roadways is smaller than a tunnel, so the smoke will propagate more fully and smoking control measures and emergency rescue will differ from tunnel fires.
Therefore, in order to investigate the smoke distribution characteristics in bifurcated roadways, the T-shaped roadway was selected as the roadway model; the fire source was located upstream of the T-junction, heat release rate and ventilation velocity were selected as the factors affecting the smoke diffusion, and ANSYS 18. 0 was used to simulate the smoke distribution in the main roadway and branched roadway.The critical velocity and the temperature and CO concentration distribution at breathing zone height in the main roadway and the branched roadway were thoroughly analyzed in this paper in order to provide suggestions for emergency rescue and personnel evacuation during underground roadway fires.
Physical Model
Based on the actual underground roadway, the 3D geometric model of T-shaped roadways was established.The size of the main roadway was 3 m × 3 m × 403 m; the branch roadway was in the center of the main roadway and the size was 3 m × 3 m × 200 m.All cross-sections were rectangular, with a width of 3 m and a height of 3 m.Because the upsteam smoke would have been pushed into both the main roadway and branched roadway, the fire source was set upstream of the T-junction.The distance from the center of the fire source to the velocity inlet was 189 m, the length of the fire source was 2 m, and the flame height was chosen to be 3 m; these were the limitations set by the roadway roof.The cross-section of the geometric model of the T-shaped roadway and coordinate system is shown in Figure 1, and the Z-direction is the height direction of the roadway.
Boundary Conditions and Assumptions
The standard k − ε equation and transport equation were chosen to simulate the turbulent flow of airflow and smoke diffusion [24], and the above equations have been extensively used in the fire simulation by scholars.The airflow inlet of the main roadway was set to be the velocity inlet and the airflow outlet of the main roadway and branched roadway were set to be the pressure outlet, as shown in Figure 1.The fire source was defined as the source term, and it was represented by a volumetric source.Due to the thermal radiation of the fire source, the temperature inside the roadway increases and the gas density in the T-shaped roadway would have been changed.Meanwhile, natural convection of gas in the vertical direction would be caused by gravity.So, the influence of gravity and buoyancy effect on air flow were considered in the simulation.
The following assumptions were adopted in the numerical simulation calculations: fresh airflow and fire smoke were considered incompressible fluids; there was no sliding on the roadway wall and no heat exchange between the wall and the air inside the roadway; obstacles such as mining cars and workers in the roadway were ignored; the effect of gas, dust, and blasting smoke on the airflow and smoke diffusion were not considered; the combustion process of the fire was not considered; the fire source was set as an energy source with a fixed rate of heat release [25]; the smoke generated by the fire was presented by CO; and the amount of CO generated by the fire was calculated according to Equation (1) [26]: where represents the amount of CO generated by the fire (m 3 /s), represents the heat release rate (kW), and represents the constant of CO generation rate (m 3 /kJ) (3.22 × 10 −6 m 3 /kJ) [27].
Parameters
In the simulation, the ambient temperature was 25 °C, the gravitational acceleration was 9.81 m/s 2 , and the ambient pressure was 101,325 Pa.The longitudinal ventilation velocities were 1 m/s~3 m/s in the main roadway, the heat release rates were 300 kw, 600
Boundary Conditions and Assumptions
The standard k − ε equation and transport equation were chosen to simulate the turbulent flow of airflow and smoke diffusion [24], and the above equations have been extensively used in the fire simulation by scholars.The airflow inlet of the main roadway was set to be the velocity inlet and the airflow outlet of the main roadway and branched roadway were set to be the pressure outlet, as shown in Figure 1.The fire source was defined as the source term, and it was represented by a volumetric source.Due to the thermal radiation of the fire source, the temperature inside the roadway increases and the gas density in the T-shaped roadway would have been changed.Meanwhile, natural convection of gas in the vertical direction would be caused by gravity.So, the influence of gravity and buoyancy effect on air flow were considered in the simulation.
The following assumptions were adopted in the numerical simulation calculations: fresh airflow and fire smoke were considered incompressible fluids; there was no sliding on the roadway wall and no heat exchange between the wall and the air inside the roadway; obstacles such as mining cars and workers in the roadway were ignored; the effect of gas, dust, and blasting smoke on the airflow and smoke diffusion were not considered; the combustion process of the fire was not considered; the fire source was set as an energy source with a fixed rate of heat release [25]; the smoke generated by the fire was presented by CO; and the amount of CO generated by the fire was calculated according to Equation (1) [26]: where F r represents the amount of CO generated by the fire (m 3 /s), Q represents the heat release rate (kW), and γ co represents the constant of CO generation rate (m 3 /kJ) (3.22 × 10 −6 m 3 /kJ) [27].
Parameters
In the simulation, the ambient temperature was 25 • C, the gravitational acceleration was 9.81 m/s 2 , and the ambient pressure was 101,325 Pa.The longitudinal ventilation velocities were 1 m/s~3 m/s in the main roadway, the heat release rates were 300 kw, 600 kw, 900 kw, and 1200 kw, respectively, and the total simulation cases were 35.Then, the influence of heat release rate and ventilation velocity on smoke diffusion and distribution characteristics were analyzed and discussed; the simulation cases are shown in Table 1.
Mesh
In order to validate the independence of mesh, four kinds of cell sizes were chosen to simulate the roadway fire, which are 0.15 m × 0.15 m × 0.15 m, 0.2 m × 0.2 m × 0.2 m, 0.25 m × 0.25 m × 0.25 m, and 0.28 m × 0.28 m × 0.28 m, repectively.Figure 2 shows the horizontal and vertical temperature distribution in the main roadway when the heat release rate is 600 kw and the ventilation velocity is 2 m/s.It can be seen that, when the interval sizes of the grid are 0.15 m and 0.2 m, the difference in lateral temperature distribution and in longitudinal temperature distribution are very small; so, taking into account both the computational accuracy and time cost, a cell size of 0.2 m × 0.2 m × 0.2 m is selected and the total number of cells is 678,375.
kw, 900 kw, and 1200 kw, respectively, and the total simulation cases were 35.Then, the influence of heat release rate and ventilation velocity on smoke diffusion and distribution characteristics were analyzed and discussed; the simulation cases are shown in Table 1.
Mesh
In order to validate the independence of mesh, four kinds of cell sizes were chosen to simulate the roadway fire, which are 0.15 m × 0.15 m × 0.15 m, 0.2 m × 0.2 m × 0.2 m, 0.25 m × 0.25 m × 0.25 m, and 0.28 m × 0.28 m × 0.28 m, repectively.Figure 2 shows the horizontal and vertical temperature distribution in the main roadway when the heat release rate is 600 kw and the ventilation velocity is 2 m/s.It can be seen that, when the interval sizes of the grid are 0.15 m and 0.2 m, the difference in lateral temperature distribution and in longitudinal temperature distribution are very small; so, taking into account both the computational accuracy and time cost, a cell size of 0.2 m × 0.2 m × 0.2 m is selected and the total number of cells is 678,375.
Back-Layering Length and Critical Velocity
Back-layering length is the important parameter in roadway fires.The back-layering flow is the most fatal contamination to workers who are blocked in upstream of the fire source.
Back-Layering Length and Critical Velocity
Back-layering length is the important parameter in roadway fires.The back-layering flow is the most fatal contamination to workers who are blocked in upstream of the fire source.
Figure 3 shows the relationship between back-layering length and longitudinal ventilation velocity under different heat release rates.Figure 3 indicates that the back-layering length decreases constantly as the ventilation velocity increases and presents a linear trend.Moreover, under the same ventilation velocity, the back-layering length increases as the heat release rate increases.However, when the heat release rate is more than 600 kw, the back-layering length increases quite slowly.This indicates that the influence of heat release rate on the back-layering length is gradually weakened as the heat release rate increases.
Figure 3 shows the relationship between back-layering length and longitudinal ventilation velocity under different heat release rates.Figure 3 indicates that the back-layering length decreases constantly as the ventilation velocity increases and presents a linear trend.Moreover, under the same ventilation velocity, the back-layering length increases as the heat release rate increases.However, when the heat release rate is more than 600kw, the back-layering length increases quite slowly.This indicates that the influence of heat release rate on the back-layering length is gradually weakened as the heat release rate increases.
where represents the dimensionless critical ventilation velocity, represents the critical ventilation velocity (m/s), represents the gravity acceleration (m/s 2 ), represents the hydraulic tunnel height (m), represents the dimensionless heat release rate, = / / ⁄ , represents the heat release rate (kw), represents the ambient air density (kg/m 3 ), represents the specific heat capacity (kJ/(kg⋅K)), and represents the environment temperature (K).
Li et al. [13] also acquired the correlation between dimensionless critical ventilation velocity and dimensionless heat release rate in a single-tube tunnel, which can be expressed in Equation ( 3): where = / / ⁄ and represents the tunnel height (m).In this paper, the critical ventilation velocity was determined by the X direction velocity vector beneath the ceiling in the main roadway.The critical ventilation velocities are 1.5 m/s, 1.8 m/s, 2 m/s, and 2.1 m/s when the heat release rates (HRR) are 300 kw, 600 kw, 900 kw, and 1200 kw, respectively.The critical ventilation velocity predicted by CFD [Q] 5/2 , Q represents the heat release rate (kw), ρ 0 represents the ambient air density (kg/m 3 ), c p represents the specific heat capacity (kJ/(kg•K)), and T 0 represents the environment temperature (K).Li et al. [13] also acquired the correlation between dimensionless critical ventilation velocity and dimensionless heat release rate in a single-tube tunnel, which can be expressed in Equation (3): where .Q = Q/ρ 0 c p T 0 g 1/2 H 5/2 and H represents the tunnel height (m).In this paper, the critical ventilation velocity was determined by the X direction velocity vector beneath the ceiling in the main roadway.The critical ventilation velocities are 1.5 m/s, 1.8 m/s, 2 m/s, and 2.1 m/s when the heat release rates (HRR) are 300 kw, 600 kw, 900 kw, and 1200 kw, respectively.The critical ventilation velocity predicted by CFD simulation under different HRRs is shown in Figure 4a, and the critical ventilation velocities calculated by equations proposed by Wu [12] and Li [13] are also presented in Figure 4a for comparison.It can be seen that the critical ventilation velocity increases with the increase in heat release rate and the increased value decreases slowly.Also, we can find that the critical ventilation velocity is higher than Wu's and Li's model, which means the critical ventilation velocity in the T-shaped bifurcated roadway is higher than in the single-tube roadway when the fire is located upstream of the T-junction.This is because, when the airflow passes through the T-junction, a portion of ventilation mass flow is pushed into the branched roadway, so the actual ventilation mass flow in the main roadway of the T-shaped bifurcated roadway is lower than in the single-tube roadway.Figure 4b presents the correlation between dimensional critical ventilation velocity and dimensional heat release rate.According to Figure 4b, ; hence, the prediction model of dimensional critical ventilation velocity in the T-shaped bifurcated roadway can be expressed as: It can be found that Equation ( 4) is similar to Equations ( 2) and ( 3) and is more close to Li's model.simulation under different HRRs is shown in Figure 4a, and the critical ventilation velocities calculated by equations proposed by Wu [12] and Li [13] are also presented in Figure 4a for comparison.It can be seen that the critical ventilation velocity increases with the increase in heat release rate and the increased value decreases slowly.Also, we can find that the critical ventilation velocity is higher than Wu's and Li's model, which means the critical ventilation velocity in the T-shaped bifurcated roadway is higher than in the single-tube roadway when the fire is located upstream of the T-junction.This is because, when the airflow passes through the T-junction, a portion of ventilation mass flow is pushed into the branched roadway, so the actual ventilation mass flow in the main roadway of the T-shaped bifurcated roadway is lower than in the single-tube roadway.Figure 4b presents the correlation between dimensional critical ventilation velocity and dimensional heat release rate.According to Figure 4b, is directly proportional to / ; hence, the prediction model of dimensional critical ventilation velocity in the T-shaped bifurcated roadway can be expressed as: It can be found that Equation ( 4) is similar to Equations ( 2) and ( 3) and is more close to Li's model.To further validate the accuracy of the numerical model adopted in this paper, the critical ventilation velocities predicted by CFD were compared with data from Li's small-scale tests [13].Li's small-scale experimental tests were conducted in a 12 m long model tunnel.The fire source, which was a 100 mm diameter porous bed burner, was located in the center of the tunnel model.We built the same simulated tunnel model as Li's, and the numerical model was the same as mentioned above (see section 2.2).The fire source was set as a volumetric source with a height of 0.25 m and a diameter of 0.1 m.Table 2 presents the critical ventilation velocity predicted by CFD under different cases in Li's experiment, and the comparison between simulation values of critical ventilation velocity and tested results by Li is shown in Figure 5.According to Table 2 and Figure 5, there is a reasonable agreement between the simulation and the experiment tests.To further validate the accuracy of the numerical model adopted in this paper, the critical ventilation velocities predicted by CFD were compared with data from Li's smallscale tests [13].Li's small-scale experimental tests were conducted in a 12 m long model tunnel.The fire source, which was a 100 mm diameter porous bed burner, was located in the center of the tunnel model.We built the same simulated tunnel model as Li's, and the numerical model was the same as mentioned above (see Section 2.2).The fire source was set as a volumetric source with a height of 0.25 m and a diameter of 0.1 m.Table 2 presents the critical ventilation velocity predicted by CFD under different cases in Li's experiment, and the comparison between simulation values of critical ventilation velocity and tested results by Li is shown in Figure 5.According to Table 2 and Figure 5, there is a reasonable agreement between the simulation and the experiment tests.
Longitudinal Temperature Profile in the Main and Branched Roadway
Inhaling high-temperature smoke is the main factor to cause casualties in fire accidents, so the temperature distribution at breathing zone height (Z = 1.6 m) was analyzed in this paper.The tolerance time in a fire environment is 12 min for a person [28], so the escape temperature that a person can escape from the fire environment successfully is set to be 60 °C.
Figure 6 presents the longitudinal temperature profile at breathing zone height of the main roadway when heat release rates are 300 kw, 600 kw, and 900 kw, respectively.Figure 6a illustrates that, when the longitudinal ventilation velocity is 1 m/s, the maximum temperature is located at the fire source and the value is 101.636°C, and the influence range of the fire source on upstream temperature is 21.4 m, which can be explained by smoke backflow and thermal convection.When the longitudinal ventilation velocities are 1.5 m/s, 2 m/s, 2.5 m/s, and 3 m/s, respectively, the influence range of fire source on upstream temperature is 0 m and the smoke temperature at the fire source increases dramatically from 25 °C to 41.16 °C, 37.13 °C, 34.73 °C, and 33.07 °C, respectively; then, the temperature rises slightly and becomes steady along the longitudinal direction (X direction).The smoke temperatures in the main roadway Ⅰ are all lower than escape temperature when the ventilation velocities are 1 m/s~3 m/s.As shown in Figure 6b, when the longitudinal ventilation velocities are 1 m/s and 1.5 m/s, the values of maximum temperature are 179 °C and 140 °C, respectively, and the influence ranges of fire source on upstream temperature are 24.2 m and 14 m, respectively.When the longitudinal ventilation velocities are 2 m/s, 2.5 m/s, and 3 m/s, respectively, the smoke temperature at the fire source increase dramatically from 25 °C to 37.366 °C, 34.869 °C, and 33.2 °C, respectively; then, the temperature rises slightly and becomes steady along the longitudinal direction (X direction).The smoke temperatures in the main roadway Ⅰ are all
Longitudinal Temperature Profile in the Main and Branched Roadway
Inhaling high-temperature smoke is the main factor to cause casualties in fire accidents, so the temperature distribution at breathing zone height (Z = 1.6 m) was analyzed in this paper.The tolerance time in a fire environment is 12 min for a person [28], so the escape temperature that a person can escape from the fire environment successfully is set to be 60 • C.
Figure 6 presents the longitudinal temperature profile at breathing zone height of the main roadway when heat release rates are 300 kw, 600 kw, and 900 kw, respectively.Figure 6a illustrates that, when the longitudinal ventilation velocity is 1 m/s, the maximum temperature is located at the fire source and the value is 101.Then, the effect of longitudinal ventilation velocity and heat release rate on average temperature at breathing zone height inside the bifurcated roadway is discussed thoroughly.Figure 7 presents the average temperature at Z = 1.6 m inside the main roadway Ⅰ and branched roadway under different ventilation velocities and different heat release rates.Then, the effect of longitudinal ventilation velocity and heat release rate on average temperature at breathing zone height inside the bifurcated roadway is discussed thoroughly.Figure 7 presents the average temperature at Z = 1.6 m inside the main roadway I and branched roadway under different ventilation velocities and different heat release rates.It can be observed from Figure 7 that the average temperatures in the main roadway Ⅰ and branched roadway are lower than 60 °C when the HRR is 300 kw and the ventilation velocities are 1 m/s, 1.5 m/s, 2 m/s, 2.5 m/s, and 3 m/s and when the HRR is 600 kw and the ventilation velocities are 2 m/s, 2.5 m/s, and 3 m/s and when the HRR is 900 kw and the ventilation velocities are 2.5 m/s and 3 m/s.The average temperatures in the branched roadway under different cases are all slightly lower than that in the main roadway Ⅰ.When the heat release rate is 300 kw, the maximum temperature difference between the main roadway Ⅰ and the branched roadway is 3.1 °C and the minimum temperature difference is 0.7 °C.When the heat release rate is 600 kw, the maximum temperature difference is 2.12 °C and the minimum temperature difference is 0.585 °C.When the heat release rate is 900 kw, the maximum temperature difference is 6.47 °C and the minimum temperature difference is 1.43 °C.When the heat release rate is 1200 kw, the maximum temperature difference is 8 °C and the minimum temperature difference is 0.3 °C.
Figure 7 also shows that, under different heat release rates, the average temperatures in the main roadway Ⅰ and branched roadway decrease as the ventilation velocity increases.This occurs because the thermal convection between airflow and smoke is intensified with the increase in ventilation velocity.And the temperature decay fits the power function, as shown in Equation ( 5): where represents the average temperature at Z = 1.6 m (°C), represents longitudinal ventilation velocity (m/s), and and represent dimensionless coefficient, respectively.
The fitting curves are shown with solid lines and dashed lines in Figure 7.The values of and are displayed in Table 3.It can be seen from Figure 7 and Table 3 the values of R-squares are all above 0.98; it can be concluded that the variation in average temperature in the main roadway Ⅰ and branched roadway according to ventilation velocity can be accurately describe by Equation ( 5).It can be observed from Figure 7 that the average temperatures in the main roadway I and branched roadway are lower than 60 • C when the HRR is 300 kw and the ventilation velocities are 1 m/s, 1.5 m/s, 2 m/s, 2.5 m/s, and 3 m/s and when the HRR is 600 kw and the ventilation velocities are 2 m/s, 2.5 m/s, and 3 m/s and when the HRR is 900 kw and the ventilation velocities are 2.5 m/s and 3 m/s.The average temperatures in the branched roadway under different cases are all slightly lower than that in the main roadway I.When the heat release rate is 300 kw, the maximum temperature difference between the main roadway I and the branched roadway is 3.1 • C and the minimum temperature difference is 0.7 • C. When the heat release rate is 600 kw, the maximum temperature difference is 2.12 • C and the minimum temperature difference is 0.585 • C. When the heat release rate is 900 kw, the maximum temperature difference is 6.47 • C and the minimum temperature difference is 1.43 • C. When the heat release rate is 1200 kw, the maximum temperature difference is 8 • C and the minimum temperature difference is 0.3 • C.
Figure 7 also shows that, under different heat release rates, the average temperatures in the main roadway I and branched roadway decrease as the ventilation velocity increases.This occurs because the thermal convection between airflow and smoke is intensified with the increase in ventilation velocity.And the temperature decay fits the power function, as shown in Equation ( 5): where T represents the average temperature at Z = 1.6 m ( • C), υ represents longitudinal ventilation velocity (m/s), and a and b represent dimensionless coefficient, respectively.The fitting curves are shown with solid lines and dashed lines in Figure 7.The values of a and b are displayed in Table 3.It can be seen from Figure 7 and Table 3 the values of R-squares are all above 0.98; it can be concluded that the variation in average temperature in the main roadway I and branched roadway according to ventilation velocity can be accurately describe by Equation (5).
The variation in average temperature with the increasing heat release rate in the bifurcated roadway is shown in Figure 8.It can be observed that the average temperatures in the main roadway I and branched roadway increase constantly as the heat release rate increases, and the predicted data of average temperature can be correlated to HRR with the following equation: where T represents the average temperature at Z = 1.6 m ( • C), Q represents the heat release rate (kw), and c and d represent the dimensionless coefficient, respectively.The variation in average temperature with the increasing heat release rate in the bifurcated roadway is shown in Figure 8.It can be observed that the average temperatures in the main roadway Ⅰ and branched roadway increase constantly as the heat release rate increases, and the predicted data of average temperature can be correlated to HRR with the following equation: where represents the average temperature at Z = 1.6 m (°C), represents the heat release rate (kw), and and represent the dimensionless coefficient, respectively.The fitting lines are shown with solid lines and dashed lines in Figure 8.The values of and are displayed in Table 4.It can be seen from Figure 8 and Table 4 the values of R-squares are all above 0.99, which indicates that the variation in average temperature in the main roadway Ⅰ and branched roadway according to heat release rate can be accurately predicted by Equation ( 6). 4. It can be seen from Figure 8 and Table 4 the values of R-squares are all above 0.99, which indicates that the variation in average temperature in the main roadway I and branched roadway according to heat release rate can be accurately predicted by Equation ( 6).
The Profile of CO Concentration in the Main and Branched Roadway
Figure 9 presents the smoke propagation in the T-shaped roadway when the HRR is 600 kw and longitudinal ventilation velocity is 1.5 m/s.According to Figure 9, it can be seen that a high concentration of CO mainly gathers near the top plate of the fire source and spreads upstream of the fire source at a certain distance.Under the influence of longitudinal ventilation, the CO generated by the fire gradually spreads downstream of the main roadway and inside the branched roadway and then is rapidly diluted.The CO concentration in the main roadway Ⅰ and branched roadway are much lower than near the fire source.Based on previous fire accident analyses, we know that high-temperature smoke contains CO, and casualties were partly caused by inhaling CO, so the distribution of CO concentration at breathing zone height is thoroughly analyzed and discussed in this section.Figure 10 shows the concentration distribution of CO at breathing zone height in the main roadway Ⅰ and branched roadway under different longitudinal ventilation velocities when the HRR is 600 kw.Based on previous fire accident analyses, we know that high-temperature smoke contains CO, and casualties were partly caused by inhaling CO, so the distribution of CO concentration at breathing zone height is thoroughly analyzed and discussed in this section.Figure 10 shows the concentration distribution of CO at breathing zone height in the main roadway I and branched roadway under different longitudinal ventilation velocities when the HRR is 600 kw.
We can see clearly from Figure 10 that the concentration of CO at the height of the breathing zone in the main roadway I and branched roadway gradually decreases and tends to be of uniform distribution as the longitudinal ventilation velocity increases.When the ventilation velocities are 1 m/s and 1.5 m/s, the concentration of CO at the height of the breathing zone in the main roadway I is higher than in the branched roadway.And, when the ventilation velocities are 2 m/s, 2.5 m/s, and 3 m/s, respectively, the concentration of CO in the branched roadway is slightly higher than in the main roadway I.
The CO volume concentration in the safe evacuation passage is less than 500 ppm in the fire environment [28], so 500 ppm is chosen to be the critical concentration to escape safely for a person.The average concentration of CO at the breathing zone height inside the main roadway I and branched roadway is shown in Figure 11.
According to Figure 11, it can be seen that the average concentrations of CO in the main roadway I and branched roadway are all lower than 500 PPM.When the HRR is 900 kw, the maximum concentration difference between the main roadway I and branched roadway is 84.444PPM and the minimum concentration difference is 5.676 PPM.When the HRR is 600 kw, the maximum concentration difference is 19.72 PPM and the minimum concentration difference is 4.776 PPM.When the HRR is 300 kw, the maximum concentration difference is 54.094 PPM and the minimum concentration difference is 2.686 PPM.In addition, the concentration differences of CO between the main roadway I and the branched roadway decrease as the ventilation velocity increases.And we also can find that the average concentrations of CO inside the main roadway I and branched roadway decrease constantly with the increase in longitudinal ventilation velocity, which fits the power function.The fitting curves are shown in Figure 11 and the fitting functions are shown in Table 5.The values of R-squares are all above 0.95 according to Figure 10 Table 5; the equations describing the variation in average concentration of CO are shown in Table 5.We can see clearly from Figure 10 that the concentration of CO at the height of the breathing zone in the main roadway Ⅰ and branched roadway gradually decreases and tends to be of uniform distribution as the longitudinal ventilation velocity increases.When the ventilation velocities are 1 m/s and 1.5 m/s, the concentration of CO at the height of the breathing zone in the main roadway Ⅰ is higher than in the branched roadway.And, when the ventilation velocities are 2 m/s, 2.5 m/s, and 3 m/s, respectively, the concentration of CO in the branched roadway is slightly higher than in the main roadway Ⅰ.
The CO volume concentration in the safe evacuation passage is less than 500 ppm in the fire environment [28], so 500 ppm is chosen to be the critical concentration to escape safely for a person.The average concentration of CO at the breathing zone height inside the main roadway Ⅰ and branched roadway is shown in Figure 11.According to Figure 11, it can be seen that the average concentrations of CO in the main roadway Ⅰ and branched roadway are all lower than 500 PPM.When the HRR is 900 kw, the maximum concentration difference between the main roadway Ⅰ and branched roadway is 84.444PPM and the minimum concentration difference is 5.676 PPM.When the HRR is 600 kw, the maximum concentration difference is 19.72 PPM and the minimum concentration difference is 4.776 PPM.When the HRR is 300 kw, the The effect of HRR on average concentration of CO inside the main roadway I and the branched roadway are discussed then.Figure 12 shows the variation in the average concentration of CO according to the heat release rate.It can be seen that, as the heat release rate increases, the average concentration of CO inside the main roadway I and branched roadway increase linearly.And it can be described by the linear function; the fitting lines are shown in Figure 12 and the equations are presented in Table 6.branched roadway increase linearly.And it can be described by the linear function; the fitting lines are shown in Figure 12 and the equations are presented in Table 6.Table 6 illustrates that the value of R-squares is 0.89406 when the ventilation velocity is 1 m/s and the type of roadway is branched roadway.However, the values of R-squares are all above 0.94 in other cases.It can be explained that, when the ventilation velocity is 1 m/s, the volume of fresh airflow pushed into the branched roadway is much lower than in the main roadway Ⅰ, so the mixing of smoke and fresh airflow is uneven, which leads to the disordered distribution of smoke in the branched roadway.Therefore, it can be concluded that the average concentration of CO inside the main roadway Ⅰ and branched roadway can be predicted better by this kind of equation.
FireFigure 1 .
Figure 1.The physical model of the T-shaped roadway in cross-section.
Figure 1 .
Figure 1.The physical model of the T-shaped roadway in cross-section.
Figure 2 .
Figure 2. Validation of numerical method by grid independence.(a) The vertical temperature distribution at X = 30 m in the main roadway.(b) The longitudinal temperature distribution at Z = 2.9 m in the main roadway.
Figure 2 .
Figure 2. Validation of numerical method by grid independence.(a) The vertical temperature distribution at X = 30 m in the main roadway.(b) The longitudinal temperature distribution at Z = 2.9 m in the main roadway.
Figure 3 .
Figure 3. Relationship between back-layering length and longitudinal ventilation velocity.Critical ventilation velocity is also a key parameter for ensuring the safety and proper emergency evacuation of workers in roadway fires.The critical ventilation velocity in single-tube tunnel fires has been researched thoroughly.Wu and Bakar [12] established a mathematical model for dimensionless critical ventilation velocity and dimensionless heat release rate in a single-tube tunnel, as shown in Equation (2): = = 0.40 0.20 ⁄ / , ≤ 0.20 0.40, > 0.20(2)
Figure 3 .
Figure 3. Relationship between back-layering length and longitudinal ventilation velocity.Critical ventilation velocity is also a key parameter for ensuring the safety and proper emergency evacuation of workers in roadway fires.The critical ventilation velocity in single-tube tunnel fires has been researched thoroughly.Wu and Bakar [12] established a mathematical model for dimensionless critical ventilation velocity and dimensionless heat release rate in a single-tube tunnel, as shown in Equation (2):
c
represents the dimensionless critical ventilation velocity, v c represents the critical ventilation velocity (m/s), g represents the gravity acceleration (m/s 2 ), H represents the hydraulic tunnel height (m), .Q represents the dimensionless heat release rate, .
Figure 4 .
Figure 4.The relationship between critical ventilation velocity and heat release rate in the T-shaped bifurcated roadway.(a) Comparison between simulation and previous model; (b) the correlation between dimensional critical ventilation velocity and dimensional heat release rate.
Figure 4 .
Figure 4.The relationship between critical ventilation velocity and heat release rate in the T-shaped bifurcated roadway.(a) Comparison between simulation and previous model; (b) the correlation between dimensional critical ventilation velocity and dimensional heat release rate.
Figure 5 .
Figure 5.Comparison of critical ventilation velocity predicted with data from Li's experiments [13].
Figure 5 .
Figure 5.Comparison of critical ventilation velocity predicted with data from Li's experiments [13].
Figure 6 .
Figure 6.The temperature profile (Z = 1.6 m) in the main roadway under different ventilation velocities and different heat release rates: (a) 300 kw; (b) 600 kw; (c) 900 kw.
Figure 6 .
Figure 6.The temperature profile (Z = 1.6 m) in the main roadway under different ventilation velocities and different heat release rates: (a) 300 kw; (b) 600 kw; (c) 900 kw.
FireFigure 7 .
Figure 7.The average temperature (Z = 1.6 m) inside the main roadway Ⅰ and branched roadway under different ventilation velocities and different heat release rates.
Figure 7 .
Figure 7.The average temperature (Z = 1.6 m) inside the main roadway I and branched roadway under different ventilation velocities and different heat release rates.
Figure 8 .
Figure 8.The variation in average temperature according to heat release rate.
Figure 8 .
Figure 8.The variation in average temperature according to heat release rate.The fitting lines are shown with solid lines and dashed lines in Figure 8.The values of c and d are displayed in Table4.It can be seen from Figure8and Table4the values of R-squares are all above 0.99, which indicates that the variation in average temperature in the main roadway I and branched roadway according to heat release rate can be accurately predicted by Equation (6).
Figure 9
Figure9the smoke propagation in the T-shaped roadway when the HRR is 600 kw and longitudinal ventilation velocity is 1.5 m/s.According to Figure9, it can be seen that a high concentration of CO mainly gathers near the top plate of the fire source and spreads upstream of the fire source at a certain distance.Under the influence of longitudinal ventilation, the CO generated by the fire gradually spreads downstream of the main roadway and inside the branched roadway and then is rapidly diluted.The CO concentration in the main roadway I and branched roadway are much lower than near the fire source.
Figure 9 .
Figure 9.The volume concentration of CO in the T-shaped roadway (600 kw and 1.5 m/s).
Figure 9 .
Figure 9.The volume concentration of CO in the T-shaped roadway (600 kw and 1.5 m/s).
Fire 2024, 7 , 16 Figure 10 .
Figure 10.The concentration distribution of CO at breathing zone height in the T-shaped roadway (600 kw).
Figure 10 .
Figure 10.The concentration distribution of CO at breathing zone height in the T-shaped roadway (600 kw).
Figure 11 .
Figure 11.The average concentration of CO (Z = 1.6 m) inside the main roadway I and branched roadway.
Figure 12 .
Figure 12.The variation in the average concentration of CO (Z = 1.6 m) according to heat release rate.
Table 1 .
The parameters and simulation cases.
Table 1 .
The parameters and simulation cases.
Dimensions of Tunnels Heat Release Rate/kw Ambient Temperature/
• C v c /m•s −1
Table 3 .
The values of coefficients a and b.
Table 4 .
The values of coefficients c and d.
Table 5 .
The correlation equations between the average concentration of CO and longitudinal ventilation velocity.is the average concentration of CO at the breathing zone height (PPM). C | 9,675.4 | 2024-03-03T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Remote Laboratories: Bridging University to Secondary Schools A live demonstration at experiment@portugal
—e-lab is a remote laboratory infrastructure powered by a software framework that allows the operation and data retrieve from a remote apparatus. In this demonstration we will present the e-lab interface properties and its modus operandi, giving as well some topics of complimentary software use for data interpretation and analysis
INTRODUCTION
The best education systems understand that teacher's qualification is the most influential factor in improving students' outcomes. However, their performance must be supported continuously at the level of training and motivation. Much of the motivation for teaching comes from the stimulus generated by their students. But in today's society, the student's motivation involves the use of emerging technology, as they early in their lives became acquaint and dependent on it. The use of online resources have a predominant impact on that motivation and most students, when they have access to the Internet, choose to study using such contents to the detriment of the classic book. The paradigm of the mentor/book is being changed by the use of online classes and/or movies where the information is concise and concentrated, giving the student a greater degree of freedom in every topic he needs.
However, on teaching physics or any other science, it is essential to create a bond of trust in what is learned through online resources. The use of "seals" settled over the sources of information improves contents credibility meant for public perception and should be an important factor in the decision to select it. On the other hand, the ordinary simulations (applets) often used to interpret contents more easily, belong to the "theory" class because they leverage on mathematical models. Whereas the practical activity (the "observation") is the main source of credit for science, the teaching of science in particular always need the comparison between this "observation" and theory. That's where confidence comes from. e-lab [1] is the bridge between these two concepts: (i) incorporates the latest technology for distance learning, allowing to perform real experiments in a fully robotic remote environment and moreover (ii) allows to watch them via video as well as to collect the experiment data set. These two facts create the correct confidence on the users of its real existence and after fitting the theoretical model allow the full trust in the theory in support of the experimental evidence.
II. THE RENEWED E-LAB
In operation at IST since 2001, e-lab recently underwent a usability study, currently offering a simpler and more concise to use graphic user interface, allowing an immediate and easy access to the required laboratory. The contents are supported by the portal e-escola [2]. Having been used in the basic university physics disciplines of Bolonha's first cycle of studies, it was established at the present time an extension of content to cover also the secondary education level, with some experiences being specially adapted for this purpose as well as their online content.
In the other end a new recent evolution is the adaptation of e-lab to post-graduated studies. Effectively Europe is facing a new age where universities are learning the importance of cooperating among themselves to achieve the necessary "critical mass" to be able to teach very specialized courses. As an example we can cite the first European course in nuclear "fusion science and engineering": as many students are apart and they need to accomplish some credits in laboratories, Fusenet will support a few remote laboratories where advanced experiments on plasma physics will be available.
III. DEMONSTRATION PROPOSAL
The proposed demonstration to be arranged at experi-ment@portugal should cover some examples of remote access to some e-lab laboratories presently available, giving to the users the experience and control of them. In parallel, some data analysis based on MSExcel and Origin will take place to demonstrate how this complementary computer aided tools can help on the underlying physics interpretation. This presentation will be done with 2 wide screens connected to a dual VGA to demonstrate a live classroom illustration. The dual display will allow to (i) show the live control of an experiment with (ii) the current video being displayed in the second monitor using QuickTime or VLC (iii) and running in parallel the data analysis software tools (iv) were passing some graphical data into text documents.
E-lab have proven to be a platform capable to cover experiments from basic school up to pos-graduated education and can manage a great variety of interfaces to accommodate almost any class of laboratories. We hope to offer at experiment@portugal a good view of its potentialities and practice.
ACKNOWLEDGMENT e-lab is a joint undertake of many people most of them regular students at IST Physics Engineering MsC which voluntary contribute to the e-lab development. For them the author leaves here a particular recognition and gratitude. | 1,117.2 | 2012-01-22T00:00:00.000 | [
"Computer Science"
] |
The Early Basilica Church , El-Ashmonein Archaeological Site , Minia , Egypt : Geo-Environmental Analysis and Engineering Characterization of the Building Materials
El-Ashmonein is a significant archaeological site with different buildings from various eras. Between the villages of El-Idara and El-Ashmonein are there mains of Hermopolis, one of the ancient Egyptian metropolis capitals of the fifteenth century of Upper Egypt, called the hare. The buildings in this archaeological site are exposed to many causes of destruction and damage. The remaining structures and granite free standing columns in this area are suffered from plenty of geo-environmental and geotechnical problems. The main objectives of this study are 1) to assess the current state of preservation of this important archaeological site, especially the basilica church with its free standing huge columns, 2) to analyze the different actions which cause the destruction of the archaeological site, in particular the old flash floods and earthquakes, and 3) to identify the geochemical and engineering properties of the construction materials of the granitic columns and other limestone structures of the basilica church by using different kind of sophisticated analytical and diagnostic tools and methods. The multi-criteria analysis allowed the integration of several elements for mapping the vulnerable zones. Results revealed that about 80% of the study area was exposed to high and medium old floods vulnerability because of the vicinity to the Nile River. The structural and non-structural measures recommended in this research will help the decision makers and planners to effectively develop strategies for future site management, intervention retrofitting and rehabilitation of this unique archaeological site.
Introduction
El-Ashmonein is a small village located in Mellway District of Miniagovernerate, which is about 246 km from Cairo, situated on the western bank of Nile river, 27 m elevation and one of the ancient Egyptian metropolis capitals of the fifteenth century of Upper Egypt is called the "hare" [1] as shown in Figure 1 (old map for different old cities including El-Ashmonein archaeological site) and Figure 2 (cross section for Mellawy by using GIS software including El-Ashmonein).El-Ashmonein archaeological site contains several old temples such as Thoth temple, Roman temples and Basilica church.But the importance of this archaeological site has not received significant attention since it needs protection and repairs.Therefore, the objectives of this research highlight the importance of this archaeological site, the reasons of the destruction of the archeological site including the focal study upon Basilica church, the current state of preservation for the archaeological site, identification of the construction materials of the free-standing stone columns of the Basilica church, and identification of the geochemical, physical and mechanical characterizations of the construction materials.The authors visited the archaeological site of El-Ashmonein for engineering surveying, taking pictures, sampling and understanding the current state of preservation for the archaeological site.
Several authors spoke about this archaeological site such as A. J. Spencer (1982).In this year, the excavations by the British Museum in the temple of Thoth at El-Ashmonein revealed building remains consisting of the lower courses of a wall and one side of a door, with an adjacent area of limestone paving [2].According to Jeffrey Spencer (1983), this volume of book contains the results of a survey carried out by the British museum expedition on the archaeological site of El-Ashmonein the ancient Hermopolis Magna in middle Egypt.The results of the survey are shown in new map of the site, showing the topography of the site itself.The survey objective was to put a framework for the future discoveries.It was told that the earliest attempt to record the topography of this site was made by the French expedition of 1798, and, although this plan provided a useful record of the state of the mounds at that time the surface details are now altered [3].Marek Barański (1987Barański ( -1990) said that the church (basilica) building with atrium and adjacent rooms formed an independent complex set in the very center of the ancient town.The building of the basilica destroyed structures previously existing on the spot.Unfortunately, after the collapse of the basilica, its remnants were partly dismantled in quest for limestone blocks the excavations carried out in the forties discovered that the basilica foundations were constructed of blocks and elements of decoration taken from Hellenistic buildings [4].Magdy Mansour Badway (1996) added in columns development if the architecture is built for the purpose of shelter, governance, entertainment, meeting, education, etc., and to achieve its purposes it was necessary to have walls of carrier or coverslip to the roofs and pillars and pillars and jealousy.Limestone or granite was used by all peoples throughout the ages.He said that the pillars have origins dating back to the early ages when the early inhabitants of the Nile supported their plant huts and their bishop's boughs with branches of trees or branches and trees.The researcher discussed the development of pillars in the Pharaonic era and their forms, Greek, Roman, Coptic and Islamic columns [5].S. R. Snape (1989) this book about one of the most important monuments at El-Ashmounein called the temple of Domitian, in this book the author showed the map of central area of Hermopolis Magna by the British museum expedition survey, 1980-1981 (R. Andrews), and also the map of Hermopolis Magna , October 1800 (E.Jomard) [6].Shaw, I. and Jameson, R. (Eds.)(1999) this dictionary told us that (el-Ashmunein; anc.Khmun) Egyptian site located close to the modern town of Mallawy, which was the cult-center of the god Thoth and capital of the 15th Upper Egyptian province.It was subject to extensive plundering in the early Islamic period, but there are still many remains of temples dating to the Middle and New Kingdoms, including a pylon (ceremonial gateway) constructed by Ramesses II (c.1290-1224 BC).The latter contained stone blocks quarried from the abandoned temples at the nearby site of EL-AMARNA (c.1353-1335 BC).There is also a comparatively well-preserved COPTIC basilica built entirely in a Greek architectural style and reusing stone blocks from a Ptolemaic temple [7].G. Roeder: Hermopolis 1929-39 (Hildesheim, 1959); J.D. Cooney: Amarna reliefs from Hermopolis in American collections (Brooklyn, 1965); G. Roeder and R. Hanke: Amarna-reliefs aus Hermopolis, 2 vols (Hildesheim, 1969-78); A. J. Spencer: Excavations at el-Ashmunein, 4 vols (London, 1983-93).Maehler, H (2012) He said that Hermopolis Magna, whose modern name El-Ashmunein is derived through Coptic Shmûn from the original Kmunu ("City of the Eight" sc.gods), is situated on the west bank of the Nile some 40 km south of el-Minia and 7 km north of Mellawi and discussed Greek history and roman history [8].S. L. Lippert (2012) explained Designation of a demotic manuscript (third century bce) written on the recto of a papyrus found at Hermupolis Magna (Ashmunein) in Upper Egypt in 1938, now in the Egyptian Museum at Cairo (P.Cairo JE 89127 − 89130 + 89137 − 89143); on the verso is a mathematical text [9].Basem Samir Elsharqwi (2005) in his book showed that Minya Governorate is one of the most important governorates of Egypt for its average location and extension along the Nile River about 135 km and the average width of about 18 km, and because it contains many beautiful sites called the bride of the Upper and Nile, and showed the archeological sites in Minya Governorate generally and the old buildings, houses and palaces [10].Abdelhaim Nour Eldin (2007) he showed the importance of El Minya and illustrated a lot of sites such as sultan corner, Elkom Alhmar, Beni Hassan, Establ Antar, small Establ, Sharouna and so on [11].Ezaa Qadous (2010) illustrated the monuments which belongs to Greco roman era in upper, middle and Lower Egypt, and told us about El-Ashmounein site and its monuments, so he said
Destruction of Basilica Church
No early remains had been found there, but this is probably the result of the destruction that happened to the city.The site is in abroad and rich area of the Nile valley.It is now very badly ruined, with small parts of temples standing above the general rubble.Only the roman period agora with its early Christian basilica is at all well preserved, giving evidence of the great prosperity of the town in late monuments as shown in Figure 3 general current state of the archaeological site of El-Ashmonein and the basilica church.
Different reasons for the destruction of the archaeological site at El-Ashmounein according to Spencer [17]: -During the roman period the character of the sacred areas of El-Ashmounein was seriously changed by the redevelopment of the site as Hellenistic city.
-Many new public buildings and temples of classical style were constructed in and around the temple complex, especially along Antinoe street and the Dromos of Hermes.
-The extensive alteration to the central city inevitably swept away some of the older pharaonic structure where a good portion of the new kingdom temple of Thoth had already gone in the Potlemaic period to clear a path for Dromos.-Throughout the fifth and sixth century the enclosure of Thoth was rapidly overlaid by deep deposits of rubbish and fill, but few built structures except for small squatters' huts.
-Subsequent rubbish dumping from the seventh century AD until well into the Islamic period is marked by discarded ceramics of this age in pits sunk into the fifth century fill.
-During the fifth century the rise in ground level on the old temple site was very rapid so in some places reaching more than three meters above the temple floor level caused many problems for the columns where the columns moved and collapsed upside down as shown in Figure 4, so that the flash flooding the main reason for the destruction of the site.
-Small building formed of baked bricks and reused stones from the temples had been founded above the fifth century fill.
-Small building formed of baked bricks and reused stones from the temples had been constructed together with the extensive pitting of the ground resulted in the fill being repeatedly distributed over a long period.
-The temples lay exposed until the fifth century AD made them vulnerable to quarrying, either for building stone or as a source of limestone for lime burning.
-A considerable quantity of stone used for the basilica church and other blocks have been found reused in a small building above the temple site itself.-Prominent architectural features comprising large masses of stone masonry such as pylons and large gateways, would have been noted as a source of limestone to feed the lime kiln, three lime kilns were founded by the German team, sited immediately north of pylon in order to exploit it for limestone.
-Other monuments were collected and brought to the spot of burning, including statues of the Ramesses II and Nectanebo I and the great stela of the latter king.
-In 1984 the British museum worked in the site and found out another large kiln used for quicklime preparation.
-Quarrying activities was evident in all sites and the reason for the poor state of preservation of the pharaonic and later temples.
-Also, one of the most important factors of destruction of the basilica church and the site generally was the anthropogenic factors so the columns was used as a building material for the modern houses and other buildings.
Architectural and Structural Description of Basilica Church
At the beginning of 4 th century AD, Christianity finally overcame the former pagan religions in Egypt.Therefore, from that time onward, a sustained architectural effort aimed at the building of Christian churches.During the Christian period, these churches represent the only type of edifice to be constructed in monumental proportions.The typology of Egyptian churches varied considerably depending on their location (for example, those built on the Mediterranean coast as opposed to the Nile Valley), whether they were built in urban or rural settings and whether or not they were connected to a monastery.However, certain generalities may be defined, and as the following is the style of basilica church called (Basilica with Transept), between the fifth and sixth centuries AD, we find, only in urban settings, the basilica with transept.This type of church was mainly located in the delta or middle Egypt regions, and was considered to be an import of architectural styles from Constantinople and the Byzantine world.The rather rare typology consists of a basilica with a nave separated from the side aisles by two ranks of columns which in general also encircle the transept.A common trait of the transept was that its north and south ends are either rectilinear or semi-circular.Examples of this type of church include [18]: Al-Hawariya (Marea)-the 6th Century, Sanctuary of St. Menas-the 5th and the 6th Centuries and Hermopolis (El-Ashmonein)-the 5th Century.
The predominant type of Christian architecture in Egypt is Basilican; it has been fashion to regard as adopted from the secular roman basilica by the early Christians, according to the architect Scott, who shows the reason for assigning an earlier and independent origin of this building.According to his theory the germ of the Christian basilica was a simple oblong aisles room divided by a cross arch, beyond which lay an alteration detached from the wall.This germ was developed by addition of side aisles returned across the entrance end, over these upper aisles were next constructed and transepts added with small oratories or chapels in various parts of building.On the other hand, the secular basilica is shown to have begun with a colonnade enclosing an open area, to have begun roofed in, to have lost the colonnades, and to have passed into a lofty hall covered with a brick vaulting.But Butler has another idea that it is of course clear that the two separated developments at one point closely coincided and the resemblance at the first accidental became in later times conscious and designed, but the secular basilicas of the fourth century are very different from the Christian churches of the epoch which resemble rather the pagan basilicas of three centuries earlier [19].
The great cathedral was a big basilica building with side galleries and built in the center of the city.It has a colonnaded transept with exedrae at both ends.
The church building with atrium and adjacent rooms formed an independent complex set in the very center of the ancient town.The basilica church complex consists of the following [20]: -The church is a big basilica with side galleries.
-The structural system is a granite column of 8.5 m height and 88 mm diameter resting on a limestone isolated footings and limestone slaps and beams.
-Its length is 55 meters, span of nave is 14.7 m and depth of aisles is 5.6 m.
-The columnated transept has exedrae at both ends.These features make the basilica is one of the most important examples of early Christian architecture development.
-At the eastend there is a great apse of width equal to the nave span.
-The western part of the church was ended by esonarthex, which later completed by narthex.
-On its north and south sides there were unsymmetrically situated staircases which led to the side galleries.
-It is supposed that its ceiling is from wood.Presumably the basilica was 22 meters high.
-There were two entrances leading to the basilica, the main entrance from spacious atrium was situated on the western end of the church.The other one was the side entrance of situated on the north leading from the Antione Street through the four-column portico (the tetrastyle).
-The baptistery tank was situated on the north east corner of the church complex (Figure 5).
Materials and Methods
Several laboratory examination and analyses were carried out to identify the nature of the granite and limestone used to build the basilica church.About 40 samples represent the construction materials (granite and limestone) were collected from the fallen fragments.Nine thin sections were examined using polarizing light microscopy to identify the petrographic characteristics of these construction materials.XRD and XRF analyses were performed to identify the components and ratios of the elements in the installation granite and limestone;
The Analytical Study
The analysis process has been carried out upon different kind of construction materials for columns of basilica church and Ramses II/Nero temple.The construction materials are Limestone and granite where limestone brought from Minia formation and Granite from Aswan.The methods which used for analysis are (XRD) X-Ray diffraction, (XRF) X-Ray florescence, (RS) Raman Spectroscopy.
-Analysis of limestone and granite with XRD Limestone of the free-standing stone columns especially for crowns, bases analyzed by using XRD identifies the chemical composition of sample, the samples analyzed by central laboratories sector (the Egyptian minerals resources).There are many studies have been carried out on Minia formation as a source of limestone used in columns.For example, M. A. W. Gaber (2017) [21] showed his research XRD analysis for Minia Limestone and his results said that in Samlut area the XRD indicates the calcite mineral is the most predominant minerals of limestone ore (major mineral) with average ratio 99.5% 7 and Beni Khalid limestones are composed mainly of calcite, the XRD results complies with chemical composition analysis that the CaCO 3 is 99.30%.Chemical analysis carried out for samples collected from deteriorated fallen part of limestone column and the results indicates that the main composition is calcite mineral as shown in Table 1 and it is so important to say that the results is matching with (M. A. W. Gaber) results.
There are also many studies have been carried out such as (M. A. El-Gohary, 2011) [22] demonstrated the chemical composition of granite of Aswan by using XRD.The result of "M. A. El-Gohary" research on weathered granite taken from unfinished obelisk displayed that the components of red spots consist of different phases of minerals which can be distinguished as follows: Assembly of microcrystal of Sanidine [(K, Na)AlSi 2. Also, two representative samples prepared from Aswan quarry in missing obelisk area and analyzed by XRD as summarized in Table 3 and Table 4 -Elemental analysis of limestone and granite with XRF XRF is so important to identify the exact minor and major elements for limestone sample which collected from deteriorated fallen limestone.Also, there are many studies studied the Minia formation and made chemical analysis by using XRF for different areas in Minia as (Gaber M.A. Wahab, 2017) told us in his paper and The XRF analysis of collected limestone revealed that the major element is CaO 55.70; accordingly, the CaCo 3 is 99.65% and the archaeological sample results as shown in Table 5.
For granite, there are many studies carried out using XRF for granite from unfinished obelisk (M. A. El-Gohary, 2011) [22].He made chemical analysis for six samples from Aswan granite to know the majority elements of granite and cause of degradation.XRF carried out on archaeological samples comparing to the previous study on quarry.Table 6 shows the result of XRF for archaeological sample from weathered fallen parts of granite.
-Analysis of limestone and granite with Raman spectroscopy
The aim of this instrument is to identify the material and its components as shown in Figure 6 for limestone and Figure 7 the spectrum of archaeological granite sample and Figure 8 the quarry sample of granite from Aswan.The Raman spectroscopy which used in analysis is at BeniSuef University laboratory, VERTEX 70, Bruker Optics, Germany (Figure 9).
-Analysis of mortar 1) Analysis of mortar with XRD XRD has been used to identify the components of the archaeological mortar which had been used in archaeological building.The components of mortar are gypsum, calcite, quartz and basanite as summarized in Table 7 shows the results of XRD pattern for archaeological material.
2) Analysis of mortar with XRF This instrument has been used to identify the elements of archaeological mortar in El-Ashmounein archaeological site as shown in Table 8 the results of XRF for archaeological mortar.
Diagnosis and Examination of the Construction Materials
Investigation or examination of building materials is so important to assess the weathering state of the building materials and to evaluate the factors and mechanism of weathering in addition to that the current state of preservation the columns.The investigation and examination have been done by using the following techniques; polarizing microscope (PLM), Scanning electron microscope (SEM), and digital microscope.The investigation and examination had been used for archaeological granite, limestone and mortar.
-Investigation of limestone and granite with polarizing microscope (PLM/C.N) Polarized light microscopy provides us a unique window into the internal structure of crystals and at the same time is aesthetically pleasing due to the colors and shapes of the crystals.The use of PLM as a tool for crystallography extends back at least 200 years.For many of those years, it was the prime tool for examining the crystal properties of minerals and inorganic chemicals, as well as organics [23].The study revealed the samples are represented and described as the following: The rock is medium to coarse-grained showing granular, hypidiomorphic, perthitic and piokilitic textures.
Mineral composition
The rock is very fine to fine-grained and composed mainly of carbonates (mainly calcite) as the essential component associated with minor amounts of iron oxides and opaques and rare amounts of phosphate minerals, microcrystalline quartz and clay minerals.Carbonates (mainly calcite) represent the matrix of the rock and occur as very fine-grained, anhedral crystals interlocking to each other.Significant number of microfossils and shell fragments in different shapes and sizes are observed scattered in the matrix of the rock and filled by recrystallized carbonates (sparite) and some microfossils filled by micrite.Opaque minerals occur as very fine to fine-grained aggregates and scattered commonly at the boundaries of the microfossils.Phosphate minerals (collophane) occur in rare amount and scattered in carbonate matrix.Microcrystalline quartz occurs in rare amount and observed distributed in the rock.Few irregular pores are observed in the rock as shown in Figure 10 the samples investigated under polarizing microscope Crossed Nicole and the samples investigated under polarizing microscope (PLM) The rock is composed mainly of alkali feldspar (microcline & orthoclase perthite), quartz, plagioclase, biotite and hornblende together with accessory of titanite, zircon, apatite and opaque minerals as shown in Figure 12.Secondary minerals are represented by sericite, carbonates, clay minerals, chlorite and iron oxides.Alkali feldspar (microcline & orthoclase perthite) is the most abundant constituent of the whole rock.It is coarse-grained, generally anhedral crystals and slightly altered and turbid with clay minerals.Some crystals of feldspars are deformed and corroded due to mild deformation effect.Quartz is essential mineral constituent occurs as fine to coarse-grained, anhedral crystals and filling the interstitial spaces between feldspar crystals.It shows stretched crystals, corroded, granulated, curved boundaries and deformed due to deformation effect.Plagioclase is medium to coarse-grained, subhedral platy in form and shows distinct lamellar twinning and partially altered to sericite and carbonate minerals.It presents also as irregular lamellae, thin films and fine inclusions intergrowths in microcline & orthoclase perthite showing perthitic texture.It is also somewhat deformed and shows curved lamella.Biotite occurs as fine to medium-grained tabular, flaky crystals at the interstices of feldspars and quartz.It is strongly pleochroic; and partially to highly altered to chlorite with liberation of iron oxides at its cleavage planes and borders.Locally radioactive halos are noticed within biotite crystals (probably due to the presence of radioactive zircon).Hornblende occurs as fine to medium-grained tabular, prismatic crystals at the interstices of feldspars and quartz.It is commonly associated with biotite; and partially to highly altered to chlorite with liberation of iron oxides at its cleavage planes and borders.Titanite is very fine to fine-grained; subhedral to euhedral rhombic and/or acute crystals, mainly associated with altered biotite and hornblende (most probably as alteration product).Zircon is very rare, extremely fine-grained, euhedral in form and generally noticed in association with biotite and hornblende.
Images
Figure 10 Figure 12 Alteration Recrystallization of primary carbonate (calcite) detected in the microfossils and shell fragments to form a coarser grained calcite.Some pore spaces are detected in the sample due to dissolution of the original constituents such as fossils shell as shown in Figure 11.
The rock is affected by alteration and deformation over original constituents.Alkali and plagioclase feldspars are partially altered to sericite, carbonates and clay minerals.Mafic minerals (biotite and hornblende) are partially to highly altered to chlorite with liberation of iron oxides at its cleavage planes and borders.The rock affected by deformation and granulation over essential constituents especially in quartz and mafic minerals as shown in Figure 13.-Investigation of limestone and granite with Scanning electron microscope (SEM) with EDX.
Scanning electron microscope used to investigate the surface of archaeological limestone and its state of disintegration and alteration, this investigation is operated with elemental analysis to confirm the results of analysis studies and to identify the elements which caused the alteration for the stone.The specification of the instrument is Using SEM Model Quanta 250 FEG (Field Emission Gun) attached with EDX Unit (Energy Dispersive X-ray Analyses), with accelerating voltage 30 KV, magnification14x up to 1,000,000 and resolution for Gun.1n).
Limestone investigated under scanning electron microscope with EDX with various spots upon the samples as shown in Figure 14, where the results are; Ca, K, Si, O and C. The texture of stone sample is not homogenous with many pores as a result of chemical and mechanical weathering of stone.Granite also investigated under SEM with EDX to investigate the effect of chemical and mechanical weathering as shown in Figure 15.
The texture is very fine to coarse-grained, showing prophyrtic texture (coarse-grained of quartz and rock fragments enclosed in fine-grained matrix).
Figure 14.SEM image with EDX for area called e (Calcite crystal), where the results are; Ca, Na, Si, and O. the texture of stone sample is not homogenous with many pores as a result of chemical and mechanical weathering of stone which lead to disintegration and alteration.
Figure 15.SEM image with EDX, the granite is suffering from high disintegration and alteration, the elemental analysis shows that this part is composed mainly from quartz.
Considerable number of irregular pores and cavities are detected in the sample.Mineral composition of the sample is very fine-grained to coarse and composed of rock fragments, quartz, cemented by very fine-grained matrix of carbonate (mainly calcite) admixed with gypsum and/or basanite and minor amount of iron oxides associated with rare amounts of halite, feldspars and mafic minerals (biotite, muscovite and epidote) and opaque minerals.Rock fragments are represented by mainly carbonate fragments (calcite), chert and flint present and occurs as coarse to medium-grained of rounded to subangular outlines and scattered in very fine-grained of the sample matrix.Few microfossils are observed scattered in the rock fragment of calcite.Quartz and feldspars occur as medium to fine-grained of subrounded to subangular outlines and cemented by a mixture of very fine-grained matrix.Iron oxides and opaque minerals occur as fine to medium-grained scattered in the sample.Halite represents by traces enclosed in some pore spaces.Mafic minerals present as fine-grained, and observed in the matrix of the sample.Significant number of irregular pores in different shapes and sizes are scattered in the sample (Figure 16).Alterations of mortar where the essential components are affected by deformation shown by micro-cracks in quartz, rock fragments and also corroded boundaries and edges.
Figure 16.The archaeological building mortar under polarizing microscope and the sample is very fine-grained to coarse and composed of rock fragments, quartz, cemented by very fine-grained matrix of carbonate (mainly calcite) admixed with gypsum and/or basanite and minor amount of iron oxides associated with rare amounts of halite, feldspars and mafic minerals (biotite, muscovite and epidote) and opaque minerals.
Several pore spaces and cavities (vugs) are present due to alteration and dissolution over the essential components in the sample such as calcite and evaporites (bassanite & gypsum).Biotite and mafic minerals are partly to be highly affected by alteration.
-Investigation of mortar with Scanning electron microscope (SEM) with EDX.
The archaeological mortar has been investigated under SEM to investigate its morphology and make elemental analysis as shown in Figure 17.
Engineering Properties of Construction Materials
Generally, the measurements of physical properties seem to be the most important to shows the behaviour of stone and its characteristics after weathering and degradation as a result of chemical or mechanical weathering so the physical properties are changing from time to time according to the severity of weathering factors [24].In the archaeological site of El-Ashmonein there are different kind of building materials the most dominant are Granite and Limestone which used in as a building material for columns in the site.So, there are two kinds of stone with different origin and locality and different geotechnical properties.
Physical Characteristics of Limestone and Granite
Several geotechnical properties had been measured and tested such as specific gravity, apparent density, porosity, water absorption and seismic analyser test to identify the Compressional waves and shear wave velocity of granite and limestone.Ten cubic samples have been weighted naturally, after drying in 105˚.In the oven the samples inside the oven for drying, and also the author measured the saturated weight of samples as shown in Table 9 displays the weight average values of granite cubic samples, naturally, after drying every cycle 10 hours, and saturated every cycle 24 hours for limestone and Table 10 for granite and Table
and Laboratory Mechanical Testing
First: The nondestructive testing: 1) Seismic analyzer test (P-and S-waves (body waves) body (seismic) waves Seismic techniques, which are known as nondestructive geophysical methods, are commonly used by engineers working in various fields such as mining, civil, and geotechnical engineering.They are frequently employed to investigate certain properties of rocks [25].Tektronix TAS250 Digital Oscilloscope (seismic analyzer) at faculty of science, geophysical department, Cairo University had been used as a nondestructive technique to identify different geo physical and mechanical properties of limestone and granite, as shown in Figure 18.So, the samples tested to identify the compressional waves and shear wave velocity (seismic waves/body waves) of both granite and limestone as a construction material for the archaeological columns.The samples are cylinders diameter 5cm and Height 10 cm.The results of the seismic test for two samples from granite and two samples from limestone are shown in Table 13 granite, and Table 14 limestone.2) Schmidt hammer test Equotip had been used as a nondestructive method for assessment the mechanical characteristics, it is carried out on 48 free standing granite columns and their bases and crowns.Figure 19 shows the plan of basilica church and distribution of columns and their numbers for Schmidt hammer test.This study aims to Figure 18.Tektronix TAS250 Digital Oscilloscope (seismic analyzer).The instrument which used for identification of seismic waves for the stone samples, this test carried out at faculty of science, geophysics department, Cairo University.15 and Table 16.2) Flexural strength test Flexural strength is a measure of the tensile strength, it is useful in knowing the quality of natural stone by identifying the stress, strain and modulus of elasticity of naturals stone used in columns as seen in Table 17 and Table 18, also
Conclusions
The present study includes the main phase of integrated proposal for the conservation of the early basilica church from the geo-environmental and structural engineering views.After introducing the geological seism tectonic settings, a hazard map of Al-Ashmonein area is presented.It also includes the structural deficiencies and their detailed study, engineering parameters, probability of failure, and microzoning of the critical parts of the free standing columns.
This current study is so important for giving intensive information about construction materials used especially in basilica church.After description of the basilica church structurally and architecturally, the destruction for the whole archaeological site causes is explained.Also, this study includes the geochemical, geomechanical and geophysical characteristics.
Petrographically, recrystallization of primary carbonate (calcite) is detected in the microfossils and shell fragments to form a coarser grained calcite.Some pore spaces are detected in the sample due to dissolution of the original constituents such as fossils shell and the granite is affected by alteration and deformation over original constituents.Alkali and plagioclase feldspars are partially altered to sericite, carbonates and clay minerals.
The mechanical properties of the samples are medium poor and are all closely correlated.The construction materials under investigations have an important intrinsic sensitivity to weathering factors especially the static, dynamic actions, ground water and salt weathering effects.
Figure 1 .
Figure 1.Map of the old cities in Egypt, including El-Ashmounein archaeological site.
Figure 2 .
Figure 2. Cross section for Mellawy center by using GIS software including El-Ashmonein archaeological site in yellow point.
Figure 3 .
Figure 3.The current state of preservation of the archaeological site of El-Ashmounein.
Figure 4 .
Figure 4.The destructed granite columns as a result of ancient flash flooding and earthquakes activities.
Figure 5 .
Figure 5. Modelling of the basilica church with the plan of the reconstruction of basilica church at Al-Ashmounein archaeological site.
Figure 6 .
Figure 6.The spectrum of limestone.
Figure 7 .
Figure 7.The spectrum of archaeological granite sample.
Figure 8 .
Figure 8.The quarry sample of granite from Aswan.
Figure 10 .
Figure 10.The samples investigated under polarizing microscope Crossed Nicole and the samples investigated under polarizing microscope (PLM).
Figure 11 .
Figure 11.Recrystallization and alteration of primary carbonate (calcite) detected in the microfossils and shell fragments to form a coarser grained calcite.
Figure 13 .
Figure 13.Alkali and plagioclase feldspars are partially altered to sericite, carbonates and clay minerals and Mafic minerals (biotite and hornblende) are partially to highly altered to chlorite with liberation of iron oxides at its cleavage planes and borders.
Figure
Figure 17.SEM image with EDX, where we can find high disintegration and alteration as a resul of mechanical and chemical weathering.theelemets are Na, Si, Ca, Mg, Cl, K, Al and C.These elemnts indicates that there is an impact for salt weathering on mortar components.
Figure 17.SEM image with EDX, where we can find high disintegration and alteration as a resul of mechanical and chemical weathering.theelemets are Na, Si, Ca, Mg, Cl, K, Al and C.These elemnts indicates that there is an impact for salt weathering on mortar components.
Figure 19 .
Figure 19.The plan of basilica church and distribution of columns and their numbers for Schmidt hammer test.
Figure 21 :
Plotting the Schmidt hammer rebound number (RN) versus the Uniaxial Compressive Strength of limestone.Figure 22: Plotting the Schmidt hammer rebound number (RN) versus the Uniaxial Compressive Strength of granite.Second: The destructive testing 1) Compressive strength test Three limestone and three granite samples of construction material of columns were prepared to carry out the uniaxial unconfined compressive strength of intact core samples diameter 5 cm and height 10 cm as seen in Table Figure 23 shows preparation of the samples for mechanical testing and ultrasonic testing.The mechanical testing results for limestone and granite are shown in Figure24and Figure25.
Figure 20 .
Figure 20.The researchers during using Equotip for granite columns in the site.
Figure 21 .
Figure 21.Plotting the Schmidt hammer rebound number (RN) versus the Uniaxial Compressive Strength of limestone.
Figure 22 .
Figure 22.Plotting the Schmidt hammer rebound number (RN) versus the Uniaxial Compressive Strength of granite.
Figure 23 .
Figure 23.Preparation of the samples for mechanical testing and ultrasonic testing.
Figure 24 .
Figure 24.The relationship between stress/strain for granite.
Figure 25 .
Figure 25.The relationship between stress/strain for limestone.
Figure 26 .
Figure 26.The relationship between stress/strain by Flexural tensile test for granite.
Figure 27 .
Figure 27.The relationship between stress/strain by Flexural tensile test for limestone.
Table 1 .
. Chemical compounds of limestone from columns by XED.
Table 2 .
Results of XRD analysis for archaeological granite taken from weathered fallen granite.
Table 3 .
Results of XRD analysis for Quarry Aswan of granite S1.
Table 4 .
Results of XRD analysis for quarry Aswan of granite S1.
Table 7 .
The results of XRD pattern for archaeological material.
Table 9 .
The weight average values of granite cubic samples, naturally, after drying every cycle 10 hours, and saturated every cycle 24 hours.
Table 10 .
Different physical properties of collected granite samples.
Table 11 .
The weight average values of limestone cubic samples, naturally, after drying every cycle 10 hours, and saturated every cycle 24 hours.
Table 12 .
The results of physical properties for the limestone samples.
Table 13 .
Compressional waves and shear wave velocity of granite cylindrical samples.
Table 14 .
Compressional waves and shear wave velocity of limestone cylindrical samples.
Table 15 .
Mechanical testing results for granite.
Table 16 .
The mechanical test results for limestone.
Table 17 .
The test results of flexural strength for granite.
Table 18 .
The test results of Flexural strength for granite. | 8,397.2 | 2019-03-15T00:00:00.000 | [
"Geology"
] |
Patient-specific finite element modeling of scoliotic curve progression using region-specific stress-modulated vertebral growth
Purpose This study describes the creation of patient-specific (PS) osteo-ligamentous finite element (FE) models of the spine, ribcage, and pelvis, simulation of up to three years of region-specific, stress-modulated growth, and validation of simulated curve progression with patient clinical angle measurements. Research Question: Does the inclusion of region-specific, stress-modulated vertebral growth, in addition to scaling based on age, weight, skeletal maturity, and spine flexibility allow for clinically accurate scoliotic curve progression prediction in patient-specific FE models of the spine, ribcage, and pelvis? Methods Frontal, lateral, and lateral bending X-Rays of five AIS patients were obtained for approximately three-year timespans. PS-FE models were generated by morphing a normative template FE model with landmark points obtained from patient X-rays at the initial X-ray timepoint. Vertebral growth behavior and response to stress, as well as model material properties were made patient-specific based on several prognostic factors. Spine curvature angles from the PS–FE models were compared to the corresponding X-ray measurements. Results Average FE model errors were 6.3 ± 4.6°, 12.2 ± 6.6°, 8.9 ± 7.7°, and 5.3 ± 3.4° for thoracic Cobb, lumbar Cobb, kyphosis, and lordosis angles, respectively. Average error in prediction of vertebral wedging at the apex and adjacent levels was 3.2 ± 2.2°. Vertebral column stress ranged from 0.11 MPa in tension to 0.79 MPa in compression. Conclusion Integration of region-specific stress-modulated growth, as well as adjustment of growth and material properties based on patient-specific data yielded clinically useful prediction accuracy while maintaining physiological stress magnitudes. This framework can be further developed for PS surgical simulation.
Introduction
Adolescent idiopathic scoliosis (AIS) is a complex threedimensional (3D) deformity of the spine defined by a progressive frontal plane curvature and axial rotation affecting 1-3% of [10][11][12][13][14][15][16] year-olds in the US [1]. This condition has been associated with a perception of physical limitation and a decrease in self-esteem and body image [2]. The formation of lateral spine curvature observed in AIS is associated with alterations in the stress profiles across vertebral epiphyseal growth plates, which has been shown to alter local growth rates [3]. The Hueter-Volkmann law, the guiding principle of growth modulation in AIS spine, states that growth is stimulated in relative tension and inhibited in relative compression [4]. This law was validated across multiple species and anatomical locations, and has been shown to produce predictable alterations in bone growth when a known loading regimen is applied [5]. Quantifying the relationship between applied stress and resulting growth in the human pediatric scoliotic spine to predict curve progression on a patient-specific basis can assist in the optimization of the type and timing of intervention.
Finite Element (FE) methods using beam elements have been widely used to model asymmetric stress to simulate progressive AIS [6][7][8][9][10][11][12]. More recently, FE models consisting of both volumetric (tetrahedral or hexahedral) and beam elements have been reported, allowing for more 1 3 detailed analysis of stress applied to the vertebral epiphyses [12,13]. Compressive stresses in these volumetric models, based on gravity and muscle stabilization forces, determine growth modulation of vertebral body height [13]. While these models were validated with longitudinal clinical data from patients with progressive scoliosis, the precision of their predictions is limited by simplifications of vertebral growth that do not consider direction-and region-specific variations in growth rates for all anatomical regions of the vertebrae, including the posterior regions, through which 3-25% of longitudinal-axis compressive stress may be transferred [14,15]. Recently published comprehensive data on region-specific normative pediatric vertebral morphology and size-invariant shape [16][17][18] have been used to validate region-specific orthotropic growth in a normative 10-yearold T1-L5 spine FE model [15]. Such methods and data could be used to incorporate asymmetric stress-modulated orthotropic region-specific growth in scoliotic spine FE models.
Maximizing the accuracy of material property assignment also plays a crucial role in simulation of stress-modulated growth; parameters such as intervertebral disc (IVD) elastic modulus and ligament stiffness may alter stress distribution at the vertebral epiphyses. While linear elastic material properties for vertebrae, intervertebral discs, and ligaments, have been implemented in several prior models, a recent study from our group used age-and level-specific non-linear mechanical properties for the spinal ligaments to improve model biofidelity [15,[19][20][21]. Additionally, since pediatric tissue properties are not widely available, age-based scaling methods have been applied to level-specific properties obtained from adult cadaveric specimen [22,23]. However, no study has implemented age-and level-specific material properties in an osteo-ligamentous pediatric scoliotic spine FE model which incorporates region-specific, stress-modulated growth.
A longitudinal study from our lab illustrated significant differences in vertebral growth patterns between normative and scoliotic vertebrae in skeletally immature rabbits [24]. While comprehensive reporting of normative vertebral shape and morphology with growth has been reported, similar data are not available for the scoliotic vertebral growth in humans [16,18,25]. Additionally, variations (biological and/or intersubject) in several prognostic factors including age, sex, weight, skeletal maturity, and spine flexibility, contribute to variable scoliotic curve progression [3,[26][27][28]. These prognostic factors have not yet been integrated together in an FE modeling framework of AIS with region-specific, stressmodulated vertebral growth; therefore, such an integration would significantly improve patient-specific modeling and prediction of curve progression [13,29].
The objective of the current study is to simulate and validate region-specific stress-modulated vertebral growth in patient-specific scoliotic spine FE models which integrate age, sex, weight, skeletal maturity, and spine flexibility. Such FE models can serve as a tool to predict curve progression, and can also aid in decision making for clinical intervention. The current study will build upon our previously published work on pediatric patient-specific FE modeling and growth [15,30].
AIS patient selection
After institutional review board approval, frontal and lateral low-dose bi-planar radiographs (EOS Imaging Inc, Paris) of 264 AIS patients were obtained from the Shriners Hospitals for Children, Philadelphia, PA, USA. The inclusion criteria for patient selection were: male and female AIS patients with skeletal maturity of Risser 0-4, with three years of follow-up (2.65 ± 0.30 years: 914, 1106, 1031, 994, and 785 days) at six-month intervals (average deviation of 31 ± 21 days), and either not braced or not brace-compliant. From this cohort, five AIS patients were selected, each having a unique Risser score of 0 through 4, respectively. These patients were either unbraced (three patients-1, 3, and 5), or non-compliant with brace wear (two patients-2 and 4), such that bracing effects were considered negligible. Table 1 shows a patient cohort that ranges in age, sex, Lenke type, Risser sign, initial and final Cobb angles, and spine flexibility, quantified via spine flexibility ratio (SFR), defined in Eq. 1.
Vertebral geometry reconstruction from bi-planar radiographs and finite element mesh generation
A total of 153 vertebral landmark points (nine per vertebra) were selected in the frontal and sagittal radiographs and registered by triangulation using a custom code (MATLAB 2020b, MathWorks, Natick, MA) [15,31]. 3D reconstructions of vertebrae using the landmark points were validated by comparing 3D landmark point locations obtained from frontal and lateral radiographs to those obtained directly from a chest CT scan. The average 3D reconstruction surface deviation was 3.0 ± 2.2 mm. Prior PS reconstruction methods reported similar accuracies [30,32].
For all subjects, after landmark point registration, PS-FE models were generated using a previously reported dualkriging method, which morphs a normative FE spine model template ( Fig. 1) to PS spine geometry based on vertebral landmark points (Fig. 2) [30,33,34]. The normative FE template utilized was a hexahedral FE model of a 10-yearold osteo-ligamentous thoracic and lumbar spine (T1-L5 with IVDs), ribcage, and pelvis with age-and level-specific ligament properties and orthotropic region-specific vertebral growth [15]. The costo-vertebral joints were modeled as beams constrained to tension-compression, and the pelvis was modeled as one unified structure, therefore no sacroiliac joint was separately modeled. This model consists of eightnode hexahedral elements with element side lengths ranging from 2 to 4 mm. After morphing the FE template model to each PS geometry, mesh quality was compared to previously established acceptability criteria metrics [30,35].
Assignment of patient age-and level-specific material properties
All anatomical structures aside from the ligaments (i.e., cortico-cancellous bone of the vertebrae, ribs, sternum, and pelvis, IVD, and costal, costo-vertebral, and transverse joint cartilage) were modeled as linear elastic materials, while all spinal ligaments were modeled as tension-only springs. Age-and level-specific material properties were assigned to the anatomical structures in each PS-FE model, shown in Table 2. The values for each mechanical property were obtained from published adult values which were adjusted based on chronological age-based scale factors [22,[36][37][38][39][40]. A comprehensive table of level-specific material properties of spinal ligaments are reported in the Appendix of previous work from our lab [15]. Furthermore, to increase patientspecificity of the FE model, IVD elastic modulus was scaled linearly with PS SFR (Eq. 1): where SFR = Spine Flexibility Ratio, CAS = Cobb Angle while Standing (at Final Timepoint), and CALB = Cobb Angle during Lateral Bending into Convexity (at Final Timepoint).
And the patient-specific, scaled elastic modulus of IVD, E scaled , was determined using Eq. 2: where E baseline is the baseline elastic modulus of IVD (Table 2), and C is a constant, solved with iterative error minimization to be 0.3 [28].
Asymmetric stress-modulated growth implementation
Vertebral growth was modeled through adaptation of a region-specific orthotropic thermal expansion method described by Balasubramanian et al. [15]. This method allocated 663 thermal expansion coefficients (for 17 vertebral levels × 13 regions × 3 directions), which were initially assigned values corresponding to normative ageand sex-based vertebral growth strains calculated with values obtained by Peters et al. [16,17]. To represent the stress-modulated asymmetric growth that is observed in AIS patients, normative thermal expansion coefficients for vertical growth in vertebral bodies were scaled using compressive stress in the corresponding IVD region (i.e., anterior left vertebral body growth is scaled based on compressive stress in anterior left IVD region). Each growth rate was updated once per time interval documented for each patient. To achieve this scaling, a linear relationship between stress and growth rate was utilized, defined by Stokes et al. [3]. In Eq. 3, G is resultant growth (mm), G m is baseline growth under normative stress (mm), is stress sensitivity (MPa −1 ), is the altered stress (MPa), and m is the baseline stress (MPa). The value for was initially defined as 0.4 MPa −1 as reported by Shi et al. (2009), and decreased linearly according to Risser sign by the relationship = 0.4 − (RisserSign × 0.08) , such that as a patient approached Risser 5 (i.e., skeletal maturity), would approach zero [41].
Boundary conditions (contacts and constraints), and loading conditions (gravitational force)
The pelvis was constrained in all translation and rotation directions, while the T1 vertebra was limited only to vertical translation. The articulating surfaces between facets were defined as sliding contacts with zero friction and exponentially increasing penalty force normal to any penetrating nodes until a maximum penetration Pelvis --E = 5000 [63] depth (automatic LS-DYNA default, ex: half the element side length) was reached, while the interfaces between IVDs and vertebral endplates were defined as tied contacts. Gravity was applied through an adaptation of a method by Clin et al. (2011), wherein global vertical 'anti-gravity' forces were applied in the upward direction, all stresses were reset, and gravity forces of the same magnitudes were then applied in the downward direction, pairing spine positioning (standing) with a corresponding stress profile [26]. The magnitude of each force vector was determined based on percentage of bodyweight acting at each level, and applied from the geometric centroid of each vertebra. After gravity application, the total principal compressive stresses in each region were computed.
Validation of spine curvatures and vertebral wedging with growth
Each patient-specific FE model was validated with measures of thoracic and lumbar Cobb angles, kyphosis, lordosis, axial rotation, and wedging of the apical vertebral body, at the time of reconstruction and at each subsequent yearly timepoint up to three years. A custom MATLAB code was created to extract these measurements at each reported timepoint, manually from each patient radiograph and automatedly from each patient-specific FE model. While mean inter-observer error in Cobb angle measurement on X-rays is generally reported as 3-5°, absolute error value threshold between radiograph-obtained measurements and those extracted from each PS-FE model was set at 8° for all angles, based on a reported 95% confidence interval for human error involved in manual angle extraction from radiographs [42,43].
Results
PS osteo-ligamentous FE models with ribcage and pelvis, with age-and level-specific material properties were generated for five AIS patients, stress-modulated growth was simulated at each radiograph acquisition timepoint, and FE simulation results were compared to data extracted from patient radiographs at each corresponding timepoint. For all FE models generated, each containing 307,564 hexahedral elements, mesh quality met the following previously reported acceptance criteria: Jacobian ≥ 0.
Spine curvatures
Average errors (defined as absolute values of error between FE model and X-Ray measured angle) in thoracic and lumbar Cobb angles were 6.3 ± 4.6° and 12.2 ± 6.6°, respectively, and average errors in kyphosis and lordosis angles were 8.9 ± 7.7° and 5.3 ± 3.4°, respectively (Fig. 3, Table 3).
Vertebral wedging
Wedging angle between the superior and inferior faces of the apical and two adjacent vertebral bodies was predicted to within an average error of 3.2 ± 2.2° (Table 4). Furthermore, error in change in wedging angle per year was 0.99 ± 0.95°. The difference between PS-FE models and patient X-Rays is shown in Fig. 4.
Stress in IVDs
Average principal stress in each IVD region for the apical vertebral level (IVD below apical vertebra) is shown in Table 5 for each patient-specific FE model.
Discussion
This is the first study to implement orthotropic region-specific stress-modulated growth in patient-specific FE models of the osteo-ligamentous T1-L5 spine, ribcage, and pelvis, with vertebra, ribcage and pelvis comprised entirely of volumetric elements, and with age-and level-specific material properties. The absolute error values in clinical indices measured during and after three years of simulated growth were under 8°, 10°, and 15° in 60%, 72%, and 83% of angular measurements, respectively, across all patients and timepoints. For thoracic or thoraco-lumbar Cobb angle in particular, these percentages were 63%, 84%, and 90%, respectively. Yearly main thoracic or thoraco-lumbar curve progression rate was assumed to vary in each FE model with age, sex, Lenke type, Risser sign, initial and final Cobb angles, and SFR, and yearly FE model curve progression averaged 3.2 ± 5.4°/year. The observed FE model Cobb angle progression rates were considered realistic based on yearly curve progression rates observed in both the patient radiographs accessed for the current study, and on report of median AIS curve progression rate being approximately 7°/year [46]. Previous reports of apical vertebral wedging in silico from 0.6 to 1.4°/year corroborate the results of the current models. The corresponding average values in the current study ranged from 0.5 to 0.8°/ year at the apical and immediately adjacent levels [29]. Furthermore, average vertebral column stress ranged from 0.11 MPa in tension to 0.79 MPa in compression, similar to the stress range of 0.3 MPa in tension to 0.7 MPa in compression reported by Clin et al. (2011) [26], though it should be noted that the prior study reported stresses for the entire T1-L5 spine, while the stress magnitudes reported in the current study consider only the apical and adjacent levels. Together, these results suggest that the current modeling approach can predict alterations in spine geometry and related vertebral wedging within the patient cohort with clinically useful accuracy. In previous work from our lab, a sensitivity analysis on vertebral growth that varied ligament stiffness, IVD elastic modulus, bone elastic modulus, and thermal expansion coefficient was performed, indicating that IVD elastic modulus had the greatest effect on average stress magnitude measured in the IVD, and thermal expansion coefficient had the greatest effect on vertical vertebral body strain [33,34]. Since thermal expansion coefficients cannot be varied directly, stress sensitivity was chosen as its proxy. Hence, stress sensitivity and IVD elastic modulus were selected for scaling in the current modeling framework. To assess the effects of scaling these two parameters on model outcomes, simulations of spine growth for one exemplar patient (patient 3) were also performed with these two parameters held constant independently. When each of these two parameters were held constant (i.e., not scaled), progression of thoracic Cobb, lumbar Cobb, kyphosis, lordosis, and axial rotation angles were affected. When stress sensitivity was not scaled, 0.59 ± 0.32, 0.54 ± 0.47, 1.09 ± 0.89, 0.24 ± 0.20, and 0.15 ± 0.13 degree differences occurred, respectively, and when IVD elastic modulus was not scaled 0.45 ± 0.23, 0.40 ± 0.31, 0.73 ± 0.71, 0.27 ± 0.20, and 0.24 ± 0.12 degree differences occurred, respectively. These sensitivity analyses justify the utility of scaling stress sensitivity based on patient skeletal maturity, and IVD elastic modulus based on patient flexibility.
The current FE model is limited by being solely osteo-ligamentous-lungs, muscles, and connective tissue beyond the intervertebral spinal ligaments were not included. These features, if included in future analyses, may alter the stress environment of the spine as a whole, and therefore alter the calculated stress sensitivity that would accurately represent curve progression. However, the stress sensitivity parameters calculated in the current FE model may be considered to compensate for its limited scope. Second, due to the relative coarseness of the Risser sign (0-5, in steps of 1) to chronological age (continuous), Table 4 Average error in frontal vertebral wedging angle of the apical and adjacent above and below levels, between the current PS-FE models and patient X-Rays ITP ITP + 1y ITP + 2y ITP + 3y All TPs Apex + 1 (°) 2.9 ± 2.1 2.9 ± 2.4 3.0 ± 2.3 3.9 ± 3.6 3.2 ± 2.7 Apex (°) 4.1 ± 1.7 2.9 ± 1.5 3.1 ± 1.5 3.4 ± 2.0 3.4 ± 1.8 Apex − 1 (°) 2.3 ± 2.1 3.0 ± 1.8 3.5 ± 1.7 2.4 ± 1.7 2.8 ± 1.9 Fig. 4 Comparison of change in wedging angle per year (in degrees) for the apical and adjacent levels, between FE models and patient X-rays. Mean and standard deviation (bars and error bars), as well as individual patient data (points) are supplied as described in the figure legend scaling of stress sensitivity based on a correlation between Risser sign and chronological age may contribute to error in prediction of curve progression. This limitation could be addressed by correlating a more precise (Sanders score) or even continuous scale (Collagen X biomarker levels) of skeletal maturity with chronological age, and using this relationship to more accurately scale stress sensitivity for each patient [47][48][49]. Third, IVD elastic modulus was scaled from normative data for all simulated timepoints based on spine flexibility obtained at the final timepoint (approximately 3 years after ITP), where in reality, IVD elastic modulus may vary between normative and AIS patients, and spine flexibility may not remain constant over this timespan [50]. Future studies may define IVD material property differences between normative and AIS patients.
Additionally, IVD elastic modulus may be updated at each timepoint according to spine flexibility ratio by obtaining lateral bending imaging at those respective timepoints. Fourth, the growth of the ribcage and pelvis were not simulated in the current study. The purpose of the ribcage and pelvis in the current model is to accurately represent more biofidelic and anatomically correct loading and boundary conditions. Inclusion of the ribcage and pelvis would also be essential to assess range of motion of the model. The model could be improved in the future by implementing growth of other anatomical structures. Fifth, the effect of activity levels resulting from various exercise regimens on both reducing curve progression or decreasing an existing Cobb angle is considered significant based on a review of 19 publications [51]. However, the activity levels of the patients included in this study are unknown. Sixth, stress effects on growth introduced by bracing were not accounted for, though brace wear compliance was reported to be low in the clinical notes and therefore considered negligible [52]. Further improvements to the modeling framework can consider vertebral stresses induced by externally applied loads from bracing. No patient activity level data were obtained, which may act as a source of prediction error. Inclusion of such data may prove to be a significant feature in precise and accurate curve progression. Seventh, while sex-specific normative baseline growth rates have been established, these baseline growth rates were assumed to apply to AIS patients. Eighth, genetic factors were not integrated in the current FE modeling approach. Numerous factors have been shown to correlate with AIS development and progression, and therefore future modeling framework development may benefit by integrating such relationships [53,54]. Ninth, no sex-specific stress sensitivity or rate-ofchange of stress sensitivity has been established [55]. While the effect of hormone levels has been shown to alter growth plate activity, longitudinal data on hormone levels in AIS patients is not typically collected, but may improve prediction accuracy [56]. Additionally, higher prediction errors of compensatory curve progression as compared to that of the primary curvature may be attributed to currently used optimization methods for stress sensitivity scaling, which only minimized error in primary curve progression. Error in compensatory curve progression prediction could be reduced in future studies by implementing alternative stress sensitivity scaling optimization methods. Lastly, no reduction in kyphosis was observed during curve progression in the current FE models, where such reduction was observed in a patient cohort [57]. In future analysis, this unmatched trend may be addressed through either compensatory force application to represent any currently missing model scope, or through altering sagittal growth modulation characteristics.
The modeling techniques developed in the current study may provide improved insights into the prediction of scoliotic curvature progression in AIS patients. This curve progression prediction utilizing increased detail of both local growth and internal stress can be developed further to assist in surgical timing, planning, and prognosis for growth-modulating interventions such as anterior vertebral body tethering (AVBT). Future models may benefit from accounting for effects of active muscle loading, overall thorax stiffness, including other structures such as lungs, diaphragm etc., and a more precise classification of deformity [58][59][60][61]. Additionally, anatomically complete FE models of progressive AIS can serve as a foundation for testing growth-modulation devices, as current human cadaveric spines or surrogates cannot mimic the growth patterns and biomechanical responses observed in children and adolescents, as well as to do so with large animal models used for device testing [24,62,63]. In a broader sense, predictive biomechanical models such as that developed in the current study can contribute to advancements in precision medicine and optimized clinical outcomes [64].
Authors' contributions CRD: conception, methodology, software development, data analysis, interpretation, and visualization, writing-original draft and editing, AFS: methodology, data curation and analysis, writing-critical review and editing, approval of version to be published, SB: conceptualization, methodology, data curation and analysis, writing-original draft, critical review, and editing, approval of version to be published.
Funding Have no funding sources to disclose, and agree to be accountable for the submitted work.
Data availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflict of interest
The authors declare no conflicts of interest.
Ethical approval Patient data utilized in the current study was obtained with institutional review board approval obtained prior to conduct of the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 5,623.8 | 2023-01-03T00:00:00.000 | [
"Medicine",
"Biology"
] |
Comparative Characteristic of Toxicity of Nanoparticles using the test of Bacterial Bioluminescence
This article is devoted to the study of the luminescence intensity of the recombinant strain of Escherichia coli with cloned gene luxCDABE of the natural marine microorganism Photobacterium leiongnathi when exposed to 20 equimolar concentrations (4 M – 6×10-6 M) of nanoparticles of zinc and copper along with their alloys and mixtures. The levels of toxicity based on the EU50 values decreased in the following order: Zn → CuZn (alloy) → Cu-Zn (mixture) → Cub → Cua. Small-sized nanoparticles of zinc and copper had the most severe toxic effects. Nanoparticles of Cu-Zn alloy and mixture have intermediate toxicity due to the leveling of the toxic effect of zinc and its potentiation of copper and provide evidence for the possibility of creating innovative antibacterial drugs based on nanoparticles of copper and zinc because of their low toxicity and prolonged periods of antibacterial effect. The combined use of zinc and copper reveals the potential of antimicrobial copper to reduce the toxic effects of zinc. Acting as a functional antagonist of copper, the presence of zinc in the nanoparticle inhibits the peroxidation of lipids and generates antioxidant activity, which seems promising.
The intensive development of nanotechnology related to the discovery of the unique properties of nanoparticles indicates the high potential for their wide application in medicine, biology and other fields 1,2,3 .However, one factor that inhibits the application of nanoparticles is the evaluation of their biological safety 4,5,6 .At present, many works are devoted to the analysis of the toxic effects of nanoparticles, particularly metal nanoparticles, and the cytotoxicity of silver and zinc nanoparticles has been established 7,8,9,10 .Although the safety of certain nanomaterials has already been assessed [11][12][13][14][15] , information on nanoparticle toxicity, including the effects of size, dose and time interval of existence, for different nanoparticles is not sufficient.The lack of a detailed evaluation of biological safety in parallel with the proven possibility of their use highlights the need to solve these problems.One universal tool in achieving these goals is a bioluminescent method of analysis using luminescent bacteria 16 .This method combines different types of sensitive cellular structures with rapidity, objectivity and quantitative nature of the detected response to the assessed influence [17][18][19][20] .In this context, the objective of this study was to comparatively study the toxicities of copper and zinc nanoparticles and their alloys using inhibition test of bacterial bioluminescence (Escherichia coli).It is recommended for medical and biological evaluation of nanomaterials by the current national standard 21,22 .
MATERIALS AND METHODS
The genetically engineered luminescent strain E. coli K12 TG1 was used; this strain was engineered to constitutively express the luxCDABE genes of the natural marine microorganism Photobacterium leiognathi 54D10 and was produced by Immunotech (Moscow, Russia).In prior studies, the strain Escherichia coli K12 TG1 was restored by the addition of chilled distilled water.The suspension of bacteria was maintained at +2-4 0 C for 30 min, after which the temperature of the bacterial suspension was brought to 15-25 0 C.
The inhibition of bacterial luminescence was tested by placing the cells in 96-well plates containing the test substance and the suspension of luminescent bacteria in a 1:1 ratio.Subsequently, the tray was placed in the measuring unit of an Infinite PROF200 microplate analyzer (TECAN, Austria), which dynamically registered the luminescence intensity for 180 min at intervals of 5 min.
The effects of the nanomaterials on the intensity of bacterial bioluminescence (I) were evaluated using the formula: ... (1) where Ik and Io are the illumination intensities of the control and experimental samples, respectively, from the 0-th and n-th minutes of measurement.Three threshold levels of toxicity are taken into account: 1.
equal to or greater than 50 -sample toxic (luminescence quenching ³ 50 %).Commercial samples of metal nanoparticles from Advanced Powder Technology, Russia were used (Table 1).
The metal nanoparticle samples were assessed (particle size, polydispersity, volume, quantitative content of fractions, surface area) by electron scanning, transmission and atomic force microscopy microscopy using the following equipment: a LEX T OLS4100, a JSM 7401F and a JEM-2000FX, respectively («JEOL», Tokyo, Japan).
The size distribution of particles was investigated using a Brookhaven 90Plus /BIMAS and ZetaPALS Photocor Compact (Russia) in lysols after dispersing the nanoparticles using an ultrasonic disperser UZDN-2T (Russia) at f-35 kHz, N 300 W, and A-10 µa for 30 min.The toxic effects of the metal nanoparticle samples were evaluated under a wide range of equimolar concentrations (4 -6×10 - 6 M).
All experiments were done in triplicate and processed by variation statistics using the software package Statistika V8 (StatSoft Inc., USA).
RESULTS
The results obtained characterize the dynamics of inhibition of bacterial bioluminescence over time and demonstrate pronounced dependence on the nature of the investigated nanomaterials along with their forms and concentrations.
Characterization of the toxicities of copper nanoparticles with different sizes
The contact of E. coli with increasing concentrations of Cu a nanoparticles in the range of 0.1 to 0.05 M leads to a complete suppression of illumination in the test object in the first 50-70 minutes of contact.This could be interpreted as a manifestation of the severe acute toxicity of the substance (Figure 1A).Subsequent dilutions of suspensions of Cu a nanoparticles at concentrations ranging from 0.025 to 0.000625 M did not completely suppress bioluminescence.Luminescence inhibited (50-70%) after 130...180 min of contact with the microorganism.As a result, the concentration of 0.003125 M caused only 20-30% luminescence quenching after 175 min, demonstrating a weak toxic effect.Breeding from 0.00156 to 0.000195 M had no effect on the values of bioluminescence in comparison with the control sample, indicating a non-toxic dose.
The development of similar toxic effects for Cu b nanoparticles required more contact time with the test organisms at the same concentrations (Figure 1B).Thus, suspensions of copper nanoparticles (Cu b ) at 0.1 and 0.05 M (P <0.05) led suppressed luminescence after 70 and 110 min of contact.Unlike Cu a nanoparticles, 0.0125 M Cu b nanoparticles also resulted in the almost complete suppression of luminescence after 150 min of contact.Suspensions of 0.00625 M Cu b nanoparticles caused a 50-70% inhibition of bioluminescence, whereas 0.003125 and 0.001563 M (P <0.05) concentrations resulted in relative inhibitions ranging from 20 to 40% (practically nontoxic).Further dilution from 0.000781 to 0.000195 M had no significant effect on bacterial bioluminescence in comparison with the control.The dynamics of the suppression could be interpreted as a manifestation of severe acute toxicity of this sample.The copper nanoparticles with a pronounced toxic effect had sizes of approximately 55 nm.
Characterization of the toxicities of mixtures and alloys of Cu and Zn nanoparticles
Compared to the control, Cu-Zn (alloy) nanoparticles completely inhibited luminescence at concentrations ranging from 0.1 to 0.003125 M in the first 10 to 60 minutes of contact and at concentrations from 0.00156 to 0.000781 M (P <0.05) after 110 to 150 minutes of contact (Figure 2A).
Cu-Zn (alloy) nanoparticle suspensions diluted to concentrations ranging from 0.000391 to 0.00009 M (P <0.05) produced 50-80% quenching of the luminescence of the bacteria, showing thorough toxic properties after 130-160 min of contact.The Cu-Zn (alloy) nanoparticles had no toxic effect at a concentration of 0.00004 M throughout the measurement time.
In contrast to the Cu-Zn (alloy) nanoparticles, the mixture of Cu-Zn prepared with the same proportion of elements in some doses was significantly more toxic and characterized by complete suppression of luminescence of bacteria in a short period of time (from 10 to 25 minutes) at much lower concentrations from 0.1 to 0.000391 M (P<0.05)compared with control (Figure 2B).For concentration of at 0.000195 M, 50 to 70 % inhibition of bacterial luminescence and an average level of toxicity were observed.However, the subsequent dilution of the mixture of nanoparticles of Cu-Zn to concentrations ranging from 0.00009 to 0.00004 M did not cause significant changes in bioluminescence compared to the control values; these mixtures were not toxic.
Characterization of the toxicity of zinc nanoparticles
Zinc nanoparticles at concentrations ranging from 0.1 to 0.000195 M (P <0.05) completely suppressed the luminescence of bacteria compared
Dependence of dose-response
The above results were used to construct dose-response curves (Figure 4) for each of the tested nanoparticle samples.The EU50 values corresponding to the molar concentrations causing 50% inhibition of bacterial bioluminescence compared with the control at different durations of exposure were also determined (Table 2).
The results show that copper nanoparticles with different sizes clearly have different toxicities.Cu a nanoparticles are one order of magnitude less toxic than Cu b nanoparticles (Table 2).A 50% inhibition of bioluminescence requires a two-fold higher concentration of Cu a nanoparticles compared to Cu b nanoparticles.
According to the results of the EU50 calculation (after the contact with Cu-Zn mixture) the minimum concentrations causing 50 % inhibition of luminescence were determined after 60 min (0.0002 M).It is 2.2 times less in relation to the alloy.If the period of contact is longer (up to 180 min), indicators EU50 match.A peculiar biological activity of the Cu-Zn nanoparticles alloy (alloy) was characterized by EC50 values after 60 min of contact; they were 0.00009±of 0.00002 M.These values do not change after the increase of contact time.So, the activity of this alloy is less toxic compared to the mixture of Cu-Z nanoparticles.
The analysis of the EU50 values showed that the biological activity of zinc nanoparticles manifests at a minimal concentration of 0.00004 M to obtain a 50% quenching of luminescence bioluminescence at all stages of contact.
Based on similar calculations of EU50, the levels of toxicity of the different nanoparticle samples to the genetically engineered luminescent strain of E. coli decrease in the following order: Zn →Cu-Zn (alloy) → Cu-Zn (mixture) → Cu b → Cu a .
DISCUSSION
The main parameters that determine the toxicities of nanoparticles include chemical composition, size and shape 23 .In this study, nanoparticles with dimensions ranging from 55 to 97 nm were studied.
By comparing two different samples of copper nanoparticles, the smaller-sized nanoparticles were found to be more toxic.The biological activity of the nanoparticles increased with decreasing particle size.According to literature data, these changes are explained by differences in the properties of individual particles and their clusters, the degree of correlation of the geometric structure and the electronic structural shell in the interaction with the biological object 24 .A similar dependence of biological activity on nanoparticle size has been described for a number of nanoparticles [25][26][27][28] .
Thus, the 55-nm Cub nanoparticles caused 100% suppression of luminescence in a series of dilutions to concentrations of 0.0125 M, 50 and 20% suppression in concentrations of 0.003 and 0.002 M; in contrast, 97-nm Cu a nanoparticles exhibited lower toxicity, resulting in 100% inhibition of bioluminescence at concentrations ranging from 4 to 0.025 M. Dilutions that had no significant impact on the luminescence of bacteria and can be characterized as biotic doses began with 0.00078 M and below.When the study of the copper nanoparticles toxicity on mammals 29 , it was found out that it exceeds EC50 values for microorganisms, that makes it possible to use them as antibacterial drugs.
Possible approaches for understanding the mechanism of toxic action of copper nanoparticles on E. coli are associated with increasing electron charge density in the outer membrane of E. coli in contact with copper nanoparticles.This is correlated with their ability to inhibit the growth of bacteria and lower the activation energy of electron transfer at the site of nanoparticle-E.coli contact.The analysis of these data allows us to adjust conditions for obtaining samples of nanoparticles with desired properties of oxide film.
In its turn, electrostatic contact between the positively charged aggregates of copper nanoparticles (ae = +15.9±8.63MB) with the negatively charged surface of E. coli K12 MG1655 pSoxS::lux and pKatG::lux with the inducible nature of the illumination (ae=-50.0±9.35mV) showed the development of oxidative stress in model microorganisms.It was presumably determined by the transport of electrons through copper nanoparticles integrated with cytoplasmic membrane to molecular oxygen.The final result of this process was damage to DNA by reactive oxygen species.It was detected using the reporter strain E. coli pRecA::lux leading to the development of bactericidal effect 30 .
Zinc nanoparticles also inhibited bioluminescence, although several orders of magnitude lower concentration were required to achieve an inhibitor effect similar to that of copper.In the range of concentrations with the highest toxicity against Escherichia coli K12 TG1 were observed in concentrations from 4 to 2x10 -5 .This toxicity can be characterized as acute and able to cause complete suppression of luminescence of the microorganisms even being in very small doses.Our results are consistent with those of other authors who studied the inhibition of bioluminescence by zinc nanoparticles in comparison with copper.Ko et al. (2014) 31 showed that the toxicity of zinc oxide nanoparticles is greater than that of copper oxide nanoparticles.Mortimer et al. (2008) 20 also showed the high toxicity of zinc oxide nanoparticles compared to copper oxide nanoparticles in terms of bioluminescence inhibition.
The combined use of copper and zinc nanoparticles leads to the potentiation of the toxic effect of copper nanoparticles.In the presence of zinc and copper nanoparticles, EU50 increases from 15 times (when compared with Cu (a, b, Cu-Zn mixture, alloy) to 66 times at all stages of contact (i.e., toxicity is characterized by prolonged manifestation).Similar results were obtained when evaluating the effects of copper oxide and zinc nanoparticles on the other test objects such as infusoria and rats.Thus, introducing zinc oxide nanoparticles into the environment of cell line HepG2 modulates the cytotoxicity of copper nanoparticles.The accumulation of a large number of zinc nanoparticles in cells alters cell membranes, and the cytotoxicity of copper nanoparticles increases 32 .The mechanism of toxic action is associated with an increased intracellular Zn 2+ concentration, leading to the excessive generation of intracellular ROS, plasma membrane leakage, mitochondrial dysfunction, and cell death along with the increased production of ROS, damage to lysosomal membranes, and activation of caspase-3 and caspase-7, eventually leading to apoptosis 33 Alloy and mixture of metal antagonists of Zn and Cu show less toxicity compared to individual testing Zn.
CONCLUSION
The combined use of zinc and copper reveals the potential of antimicrobial copper to reduce the toxic effects of zinc.Acting as a functional antagonist of copper, the presence of zinc in the nanoparticle inhibits the peroxidation of lipids and generates antioxidant activity, which seems promising.
Thus, the level of toxicity of the nanoparticle samples that was characterized by EU50 values using genetically engineered luminescent strain of E. coli decreased in the series: Zn → Cu-Zn (alloy) → Cu-Zn (mixture) → Cu b → Cu a .The toxicity of copper nanoparticles is higher in particles with smaller sizes and has a prolonged effect.
Nanoparticles of Cu-Zn alloy and mixture have intermediate toxicity due to the leveling of the toxic effect of zinc and its potentiation of copper and provide evidence for the possibility of creating innovative antibacterial drugs based on nanoparticles of copper and zinc because of their low toxicity and prolonged periods of antibacterial effect.
Fig. 4 .
Fig. 4. The relative values of luminous intensity for a luminescent strain of E. coli in contact with nanoparticles.The ordinate is the relative value of luminescence intensity in comparison with the control
Table 2 .
Values of EC50 (M) for the test organism E. coli K12 TG1with cloned luxCDABE genes of P. leiongnathi 54D10 contacting nanoparticles of copper and zinc along with their alloy and mixture | 3,564.4 | 2015-09-25T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Investigating the Opacity of Verb-Noun Multiword Expression Usages in Context
This study investigates the supervised token-based identification of Multiword Expressions (MWEs). This is an ongoing research to exploit the information contained in the contexts in which different instances of an expression could occur. This information is used to investigate the question of whether an expression is literal or MWE. Lexical and syntactic context features derived from vector representations are shown to be more effective over traditional statistical measures to identify tokens of MWEs.
Introduction
Multiword expressions (MWEs) belong to a class of phraseological phenomena that is ubiquitous in the study of language (Baldwin and Kim, 2010). Scholarly research in MWEs immensely benefit both NLP applications and end users (Granger and Meunier, 2008). Context of an expression has been shown to be discriminative in determining whether a particular token is idiomatic or literal (Fazly et al., 2009;Tu and Roth, 2011). However, in-context investigation of MWEs is an underexplored area.
The most common approach to treat MWEs computationally in any language is by examining corpora using statistical measures (Evert and Krenn, 2005;Ramisch et al., 2010;Villavicencio, 2005). These measures are broadly applied to identifying the types 1 of MWEs. While there is ongoing research to improve the type-based investigation of MWEs (Rondon et al., 2015;Farahmand and Martins, 2014;Salehi and Cook, 2013), the challenge of token-based identification of MWEs (as in tagging corpora for these expressions) requires more attention (Schneider et al., 2014;Brooke et al., 2014;Monti et al., 2015).
In this study, we focus on a specific variety of MWEs, namely Verb + Noun combinations. This type of MWEs doesn't always correspond to fixed expressions and this leads to computational challenges that make identification difficult (e.g. while take place is a fixed expression, makes sense is not and can be altered to makes perfect sense). The word components in such cases may or may not be inflected and the meaning of the components may or may not be exposed to the meaning of the whole expression. This paper outlines investigation of MWEs of the class Verb + Noun in Italian. Examples of these cases in Italian are fare uso 'to make use', dare vita 'to create' or fare paura 'to frighten'.
We propose a supervised approach that utilises the context of the occurrences of expressions in order to determine whether they are MWEs. Having the whole corpus tagged for our purpose of training a classifier would be a labour-intensive task. A more feasible approach would be to use a specialpurpose data, labeled with concordances containing Verb + Noun combinations. We report the preliminary results on the effectiveness of context features extracted from this special-purpose language resource for identification of MWEs.
We differentiate between expressions whose instances occur with a single fixed idiomatic or literal behaviour and the ones that show degrees of ambiguity with regards to potential usages. We partition the dataset in a way to account for both of these groups and the experiments are run separately for each.
To extract context features, we use a word embedding approach (word2vec) (Mikolov et al., 2013) as the state of the art in the study of dis-tributional similarity. We extract features from the raw corpus without any pre-processing. While we report the results for Italian, the approach is language-independent and can be used for any resource-poor language.
Motivation
It is important to consider expressions at the token level when deciding if they are MWEs. The reason being, there are expressions that in some cases occur with an idiomatic sense whereas with a literal sense in others. This could be determined by the context in which they appear. For example take the expression play games. It is opaque with regards to its status as an MWE and depending on context could mean different things. For example in He went to play games online it has a literal sense but is idiomatic in Don't play games with me as I want an honest answer. A traditional classification model that is blind to linguistic context proves to be insufficient in such cases. The following is an example of the same phenomenon in Italian which is the language of interest in this study: 1) Per migliorare il sistema dei trasporti, si dovrebbero creare ponti anche verso e da le isole minori.
'In order to improve the transportation system, the government should build bridges both to and from the smaller islands.' 2) Affinch possiamo migliorare la convivenza fra popoli diversi, bisognerebbe creare ponti, non sollevare nuovi muri! 'In order to improve coexistence among different people, we should build bridges not raise new walls!'
Related Work
With regards to context-based identification of idiomatic expressions, Birke and Sakar (2006) There is some recent interest in segmenting texts (Brooke et al., 2014;Schneider et al., 2014) based on MWEs. Brook et al. (2014) propose an unsupervised approach for identifying the types of MWEs and tagging all the token occurrences of identified expressions as MWEs. This methodology might be more useful in the case of longer idiomatic expressions that is the focus of that study. Nevertheless for expressions with fewer words, the aforementioned challenges regarding opacity of tokens limit the efficacy of such techniques. The supervised approach posited by Schneider et al. (2014) results in a corpus of automatically annotated MWEs. However, the literal/idiomatic usages of expressions have not been dealt with in particular in their work.
The idea behind our work is to use concordances of all the occurrences of a Verb + Noun expression in order to decide the degree of idiomaticity of a specific Verb + Noun expression. Our work is very related to the work of Tu and Roth (2011), in that they have also particularly considered the problem of in-context analysis of light verb construction (as a specific type of MWEs) using both statistical and contextual features. Their approach is also supervised, but it requires parsed data from English. Their contextual features include POS tags of the words in context as well as information from Levin's classes of verb components. Our approach requires little pre-processing and is best suited for languages that lack ample tagged resources. The present study is in the same vein as the approach taken by Gharibeh et al. (2016). Here, we have specifically analysed expressions that have more ambiguous usages, running separate experiments on partitions of the dataset.
Methodology
Our goal is to classify tokens of Verb + Noun expressions into literal and idiomatic categories. To this end, we exploit the information contained in the concordance of each occurrence of an expression. Given each concordance, we extract vector representations for several of its words to act as syntactic and lexical features. Compared to literal Verb + Noun combinations, idiomatic combinations are expected to appear in more restricted lexical and syntactic forms (Fazly et al., 2009). One traditional approach in quantifying lexical restrictions is to use statistical measures. (Ramisch et al., 2010).
We target syntactic features by extracting vectors for the verb and the noun contained in the expression. Here we extract the vectors of the verb and the noun components in their raw form hoping to indirectly learn lexical and syntactic features for each occurrence of an expression. We believe that the structure of the verb component is important in extracting fixedness information for an expression. Also, the distributional representation of the noun component is informative since Verb + Noun expressions are known to have some degrees of semi-productivity (Stevenson et al., 2004).
Additionally, we extract vectors for cooccurring words around a target expression. Specifically, we focus on the two words immediately following the Verb + Noun expression. We expect the arguments of the verb and the noun components that occur following the expression to play a distinguishing role in these kinds of so-called complex predicates 2 (Samek-Lodovici, 2003).
The word vectors in this study come from the Italian word2vec embedding which is available online 3 . The generated word embedding approach has applied Gensim's skipgram word2vec model with the window size of 10 to extract vectors of size 300 for Italian words from Wikipedia corpus.
In order to construct our context features, given each occurrence of a Verb + Noun combination we concatenate four different word vectors corresponding to the verb, noun, and their two following adjacent words while preserving the original order. In other words, given each expression, the context feature consists of a combined vector with the dimension of 4 * 300 = 1200.
Concatenated feature vectors are fed into a logistic regression classifier. The details with regards to training the classifier are explained in Section 6.
Experimental Data
The data used in this study is taken from an Italian language resource for Verb + Noun expressions 2 Most of the Verb + Noun expressions that we investigate belong to the category of complex predicates which is the focus of Samek-Lodovici (Samek-Lodovici, 2003) 3 http://hlt.isti.cnr.it/wordembeddings/ (Taslimipoor et al., 2016). The resource focuses on four most frequent Italian verbs: fare, dare, prendere and trovare. It includes all the concordances of these verbs when followed by any noun, taken from the itWaC corpus (Baroni and Kilgarriff, 2006) using SketchEngine (Kilgarriff et al., 2004). The concordances include windows of ten words before and after an expression; hence, there are contexts around each Verb + Noun expression to be used for the classification task 4 . 30, 094 concordances are annotated by two native speakers and can be used as the gold-standard for this research. The Kappa measure of interannotator agreement between the two annotators on the whole list of concordances is 0.65 with the observed agreement of 0.85 (Taslimipoor et al., 2016). Since the agreement is substantial, we continue with the first annotator's annotated data for evaluation.
Partitioning the Dataset
The idea is to evaluate the effect of context features to identify the literal/idiomatic usages of expressions, particularly for the type of expressions that are likely to occur in both senses. In our specialised data, around 32% of expression types have been annotated in both idiomatic and literal form in different contexts. For this purpose, we divide the data into two groups: (1) Expressions with a skewed division of the two senses (e.g., with more than 70% of instances having either a literal or idiomatic sense). 5 (2) Expressions with a more balanced division of instances (e.g., with less than or equal to 70% of instances having either a literal or idiomatic sense).
We develop different baselines to evaluate our approach on these two groups as explained in the following section.
Majority baseline
We devise a very informed and supervised baseline based on the idiomatic/literal usages of ex-pressions in the gold-standard data. According to this baseline a target instance vn ins , of a test expression type vn, gets the label that it has received in the majority of vn occurrences in the gold-standard set. The baseline approach labels all instances of an expression with a fixed label (1 for MWE and 0 for non-MWE). This is a high precision model when working with Group 1, due to the more consistent behaviour of instances there. However, its results are suitable for evaluating the results of our developed model over expressions of Group 2.
Association measures as a baseline
The data in Group 1 include the expressions that mostly occur in either idiomatic or literal forms. These expressions are commonly categorised as being MWE or non-MWE using association measures. Association measures are computed by statistical analysis through the whole corpus, hence the values are the same for all instances of an expression. In other words, these methods are blind to the contexts in which different instances of an expression could occur. To evaluate our model over data in Group 1, these association measures are used as features to develop a baseline. We focus on two widely used association measures, log-likelihood and Salience as defined in SketchEngine. We also use frequency of occurrence as a statistical measure to rank MWEs. The statistical measures are computed using SketchEngine on the whole of itWac. The statistical measures are then given to an SVM classifier to identify MWEs.
Evaluation Setup
There are 1, 480 types of expressions with 28, 483 occurrences in Group 1 and 169 types of expressions with 1, 611 occurrences in Group 2. For each group, we extract context features to train logistic regression classifiers.
Our proposed context features are vector representations of the raw form of the verb component, the raw form of the noun component and a window of two words after the target expression. We refer to the combination of these vectors as the Context feature. We apply a 5-fold cross validation approach to compute accuracies for each classifier. We split the dataset into five separate folds so that no instance of the same expression could occur in more than one fold. This is to make sure that the test data is blind enough to the training data. The classifiers are compared against the baselines using different features. The results are reported in Tables 1 and 2. Table 2 shows the results of our model over data in Group 2 compared to the majority baseline. Recall that the data instances in Group 2 are highly unpredictable in their occurrence as MWE or non-MWE. We expect that our supervised model using context features (Context) be able to disambiguate between different instances of an expression. Here, our model performs slightly better than the informed majority baseline. Statistical measures are expected to be promising features when identifying MWEs among expressions with consistent behaviour. However, the results in Table 1 show that our Context features are more effective in MWE classification even when applied over Group 1 and also over the whole data.
Results and Analyses
The good performance when using word context features leads us to think that their usefulness can be attributed to the information obtained from external arguments of the verb and the noun constituents of expressions. More experiments need to be done to confirm this and also to find the best suitable window size for the word context around a target expression 6 .
We have also trained the logistic regression model with the combination of the Context features and the association measures in Table 1. According to these results, the combination of features improves the accuracies of our model in identifying idiomatic expressions specially when applied over the consistent data in Group 1. The results lead us to believe that context features are even more useful in cases where we expect the best result from statistical measures due to the more consistent behaviour of the data. The better performance when using Context and statistical measures together, compared with when we use Context features alone is also a remarkable observation visible at Table 1. Our experiment using the combination of Context and Salience (as the best statistical measure) for training over Group 2 expressions (Table 2), shows that the statistical measure is not helpful for the class of ambiguous expressions.
Conclusions and Future Work
We investigate the inclusion of concordance as part of the feature set used in supervised classification of MWEs. We have shown that context features have discriminative power in detecting literal and idiomatic usages of expressions both for the group of expressions with high potential of occurring in both literal/idiomatic senses or otherwise. Our results suggest that, when used in combination with traditional features, context can improve the overall performance of a supervised classification model in identifying MWEs.
In future, we intend to consider incorporating linguistically motivated features into our model. We will also experiment with constructing features that would consider long-distance dependencies in cases of MWEs with gaps in between their components. | 3,641.4 | 2017-01-01T00:00:00.000 | [
"Linguistics"
] |